Hi, I have made this Tcl script to find the IDs by using MQL in the database, but my script is not working

Below is the Tcl script. I am not sure about the script; could someone please help me solve the issue?
I am getting one error: should be "proc name args body"
proc {
puts "########### Trying to find the Id's ###########"
mql start transaction
set Id {mql temp query bus 'AIRBUS_E_Document_ElectricalDiagram' * * where 'attribute[clau*].value==FALSE' select id;}
set error[Catch {proc $Id} sResult]
If {$error == 0}{
puts "$Id"
}else{
puts "Error -$sResult"
mql abort transaction
}
puts "######## Finding Id's are Completed #########"
}
Please let me know if any changes are required here.

See the proc documentation -- your proc is missing the procname and the arglist.
See also the if documentation.
Tcl is a word-oriented language, so it is vital that arguments to commands are separated by whitespace:
If {$error == 0}{ ==> if {$error == 0} {
}else{ ==> } else {
set error[Catch {proc $Id} sResult]
Again, a missing space, this time after "error". Note too that Tcl is case-sensitive: the commands are catch and if, not Catch and If. I don't know what you want to do here -- proc $Id is not a meaningful call; you probably meant to catch the mql query itself.
See also the rules of Tcl syntax -- there are only 12 of them, so spend some time reading them.
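Putting those fixes together, a minimal corrected sketch might look like the following. It assumes mql is available as a Tcl command (as in the MQL Tcl environment); note the braces around the where clause, which keep Tcl from treating [clau*] as a command substitution, and the commit on success, which the original never performed:
proc findIds {} {
    puts "########### Trying to find the Id's ###########"
    mql start transaction
    # catch evaluates the script and stores its result (or the error) in sResult
    set error [catch {
        mql temp query bus AIRBUS_E_Document_ElectricalDiagram * * \
            where {attribute[clau*].value == FALSE} select id
    } sResult]
    if {$error == 0} {
        puts $sResult
        mql commit transaction
    } else {
        puts "Error - $sResult"
        mql abort transaction
    }
    puts "######## Finding Id's are Completed #########"
}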


How to check if file exists in Eiffel

feature
open_file_sample
local
l_file: UNIX_FILE_INFO
l_path: STRING
do
make
l_path := "/var/log/syslog"
l_file.update (l_path)
if l_file.parent_directory.exists and then l_file.parent_directory.is_writtable then
create l_file.make
end
-- AS the above statement doesn't exist!
check
syslog_file_exists_and_is_readable: l_file.exists and then l_file.is_readable
end
end
Is this the proper way to check for file existence in Eiffel?
I was wondering if there is a way not to create 2 objects. I'll complete my check with the following steps:
define the path: l_file_path := "/some/path/with_file.log"
check that the parent directory exists and that we have the rights to write into it
create the log file
The problem when accessing the file system is that the property of a file or directory may have changed between the time you query it and the time you want to use it (even if it's only a small fraction of a second). Because of that, assertions in Eiffel of the form:
f (a_file: RAW_FILE)
require
a_file.is_writable
do
a_file.open_write
may be violated. In the Gobo Eiffel libraries, instead of checking whether a file can be opened in write mode before actually opening it, the reverse approach was chosen: try to open the file, and then check whether it was opened successfully.
f (a_pathname: STRING)
local
l_file: KL_TEXT_OUTPUT_FILE
do
create l_file.make (a_pathname)
l_file.recursive_open_write
if l_file.is_open_write then
-- Write to the file.
l_file.close
else
-- Report the problem.
end
Note that it uses recursive_open_write and not just open_write, so that missing directories in the path get created as well.
You can use
{FILE_UTILITIES}.file_exists (the_file_name)
or
(create {RAW_FILE}.make_with_name (the_file_name)).exists
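For instance, a minimal guard using the FILE_UTILITIES variant (a sketch; the_file_name is whatever path you are checking):
if {FILE_UTILITIES}.file_exists (the_file_name) then
    -- safe to open the file here (subject to the race condition noted above)
else
    print ("File not found%N")
end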
You can do something similar to this:
do
if not l_file.exists then
print ("error: '" + l_path + "' does not exist%N")
else
...
My final solution is the following, and it is open to criticism; I personally find it very complicated in comparison to more low-level languages and libraries (such as bash):
log_file_path: detachable PATH
-- Attached if can be created
local
l_file: UNIX_FILE_INFO
l_path, l_parent_dir: PATH
l_fu: FILE_UTILITIES
do
create l_fu
-- Parent directory check
create l_path.make_from_string ({APP_CONFIGURATION}.application_log_file_path)
l_parent_dir := l_path.parent
if not l_fu.directory_exists (l_parent_dir.out) then
l_fu.create_directory_path (l_parent_dir)
end
create l_file.make
l_file.update (l_parent_dir.out)
if not l_file.exists or not l_file.is_access_writable
then
io.putstring ("Error: " + log_file_path_string + " parent directory is not writable and cannot be created")
check
parent_dir_exists_and_is_writable: False
end
else
Result := l_path
end
ensure
file_name_could_be_created: Result /= Void
end

How to fetch pages when querying a UniVerse database using the .NET SDK and SQL

I am connecting to a UniVerse database (from Rocket Software) using their .NET driver. I would like to fetch data on demand per user request, page by page, i.e. do pagination. With other databases we could use OFFSET/FETCH, but UniVerse does not seem to support it. It does not recognize the keyword OFFSET; something like
SELECT NAME, AGE FROM CONTACTS WHERE AGE > 25 OFFSET 5 SAMPLE 5 does not work. It does not recognize those keywords, and there is no good documentation :-(
Note: although it is traditionally a multi-value database, the one I am using does not use multi-value types; the structure is normalized.
This is certainly one of the shortcomings of this platform. I have worked through this in the past with something similar to the following subroutine. I had to remove a bunch of stuff for brevity, but this compiles, so it must be completely bug free, right?
Caveats: you need an @SELECT item in the DICT of each file you want to use this with, containing all of the columns you want to return.
Multivalues get a little tricky. I had flattened the data I was using this with so I did not run into that problem, but this does not do UNNESTs.
Also, you might want to add a value saying how many records there are in total, and possibly work out some kind of token passing and list saving to cut down on executing the query each time you run it, but that gets much, much deeper than the basic question at hand.
SUBROUTINE SQLSelectWithOffset(TableName,UVWithClause,Starting,Offset)
***********************************************************************
* PROGRAM ID: SQLSelectWithOffset
*
* PROGRAM TITLE: SQLSelectWithOffset
*
* DESCRIPTION: UniVerse doesn't support SQL commands using STARTING and OFFSET,
* which makes life hard when you want all of a file
* but choke on the size. Tokens allow for the select list to be saved.
* TableName = UV file to select on. If this is blank, the program will return the number of records remaining.
* UVWithClause = Your criteria: the WITH or BY criteria you want in a sort select.
* Starting = Holds your place in line.
* Offset = How many records to return.
************************************************************************
$INCLUDE UNIVERSE.INCLUDE ODBC.H
RETURN.LIST = ""
IF Starting = "" or Starting < 1 THEN
Starting = 1
END
GOSUB GET.MASTER.LIST
FOR X=Starting TO Starting + Offset - 1 ;* return Offset records beginning at Starting
ID = EXTRACT(FULL.LIST,X,0,0)
IF ID = "" THEN CONTINUE
RETURN.LIST<-1> = ID
NEXT X
SELECT RETURN.LIST TO 9
SQLSTMT ="SELECT * FROM ":TableName:" SLIST 9"
ST=SQLExecDirect(#HSTMT, SQLSTMT)
RETURN
GET.MASTER.LIST:
STMT = "SSELECT ":TableName
IF UVWithClause NE "" THEN
STMT := " ":UVWithClause
END
EXECUTE "CLEARSELECT"
EXECUTE STMT
READLIST FULL.LIST ELSE FULL.LIST = ""
RETURN
END
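A hypothetical caller, to make the paging contract concrete (the file, fields, and page size are made up for illustration; with the loop fix above, the fourth argument is the page size):
* Fetch page 3 of CONTACTS over age 25, 25 records per page
PAGE.SIZE = 25
PAGE.NUM = 3
STARTING = ((PAGE.NUM - 1) * PAGE.SIZE) + 1
CALL SQLSelectWithOffset("CONTACTS", "WITH AGE > 25 BY NAME", STARTING, PAGE.SIZE)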
Good luck, please only use this information for good!

Reading HL7 data from a file and inserting it into a SQL Server table

Could someone please help me with capturing HL7 data into SQL Server using Mirth Connect? I was searching for examples, but I was not able to find any tutorials that demonstrate looping through multiple segments. I was able to insert records into the database by going through the tutorials, but I'm still stuck on the looping process.
Could someone please share some links or give me some ideas that I can go through?
This is my initial thought for looping through each segment, since I assume that Mirth Connect reads a file line by line.
Thanks for the help.
LOGIC (I'm not sure whether this is the right approach):
for each (seg in RAWFILE) {
if (seg.name().toString() == "MSH") {
insert into table values ();
}
if (seg.name().toString() == "PID") {
INSERT INTO TABLE2 VALUES ();
}
}
sample RAW DATA
MSH|^~\&|LAB|CCF|||20040920080937||ORM^O01|42640000009|P|2.3|
PID|||56797971||RESULTSREVIEW^TESTPATIENT^||19650525|M||||||||||56797971|
PV1||O|UNKO^|||||
ORC|RE||A0203809||IP|||||||
OBR|1|A0203809|A0203809|400090^Complete Blood Count|||200609240000|||||||200609240847||deleted^^^^MD^^^^^^||||||200609241055|||P
OBX|1|ST|40010^White Blood Count (WBC) (x1000)||PENDING||||||P
OBX|2|ST|40020^Red Blood Count (RBC)||PENDING||||||P
ORC|RE||A0203809||CM|||||||
OBR|2|A0203809|A0203809|650300^Depakene (Valproic Acid) Level|||200609240000|||||||200609240847||^deleted^^^^MD^^^^^^||||||200609241055|||F
OBX|3|NM|65030^Depakene (Valproic Acid) Level||76.8|ug/ml|50-100||||F|||200609241054||
Sounds like you've got the DB insertion working and you're having questions about how to handle repeating segments. Here is some code that I use in Mirth for handling repeating segments. Of course your mileage may vary, but this should accomplish what you are wanting.
var segCount = 0;
// Loop through message and count number of OBX segments
for each (segment in msg.children()) {
if(segment.name() === 'OBX') {
segCount++;
}
}
// Make changes if there are OBX segments
if (segCount > 0) {
for (var i = 0; i < segCount; i++) {
tmp = msg;
// Add this segment to the database (see the sketch below this block)
// Here I am changing each OBX-5.1 to contain 'Normal' if OBX-3.1 is 'Some Text'
if (msg['OBX'][i]['OBX.3']['OBX.3.1'].toString() === 'Some Text') {
tmp['OBX'][i]['OBX.5']['OBX.5.1'] = 'Normal';
}
}
}
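For the "add this segment to the database" step above, a hedged sketch using Mirth's DatabaseConnectionFactory from a JavaScript step (the driver class, URL, credentials, and table name are placeholders to substitute for your environment):
var dbConn = DatabaseConnectionFactory.createDatabaseConnection(
    'com.mysql.jdbc.Driver', 'jdbc:mysql://localhost:3306/hl7db', 'user', 'pass');
try {
    // OBX-5.1 for the i-th OBX segment, as in the loop above
    var obxValue = msg['OBX'][i]['OBX.5']['OBX.5.1'].toString();
    // Plain string concatenation for brevity; escape or parameterize real input
    dbConn.executeUpdate("INSERT INTO obx_results (result_value) VALUES ('" + obxValue + "')");
} finally {
    dbConn.close();
}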
You want to pull the information from an HL7 file and insert it into a DB, regardless of type.
So, create one channel with the inbound message type set to HL7, regardless of whether you are taking the HL7 message from a file or from an open TCP/IP connection.
Go to the source, open the transformer, create a JS transformer, supply an HL7 message in the inbound template, and then extract information from the message and store it in variables, something like below:
var firstname = msg['PID']['PID.5']['PID.5.2'].toString();
A helpful tip: drag and drop the elements from the inbound message template and store them in variables.
Now put this variable into the channel map, so that we can capture it in the destination:
channelMap.put('first_name',firstname);
Now for the second part:
Go to the destination of the same channel and create a Database Writer that writes the information to the DB.
Don't select Use JavaScript; instead just write your INSERT query, referencing the mapped variable, something like below:
INSERT INTO PATIENT (first_name) VALUES ('${first_name}');
There is a whole lot of documentation available from Mirth to help you with the Database Writer.
Hope this helps!

Name Not Defined Error within a For Loop?

I'm new to Python (this is my second language), so hopefully my question can help somebody else also struggling with something similar.
For reference, I'm using Netbeans IDE 6.9.1 and running Python 2.7.3.
A bit of a backstory, I'm studying a transportation problem for thesis, and I need to generate a network of cities (nodes) and roads (arcs). What I'm doing with the code below is generating a string that I'll pass to an open(file,'w') operation, where I'll write randomly-generated data to a text file.
For example: FNodes = '\DijkstraShortestPath\Data\100Nodes\Node5.txt'
I keep getting a "name 'FNodes' is not defined" error when I run the code below.
I've spent hours trying to figure this out; shouldn't this be defined? After all, I did write "FNodes = bla bla bla".
I tried taking it out of the loop, but that brought up the same errors with 'item' and 'replications' since they are used in the FNodes string. This makes sense since they are defined in the for loop.
If you could help a new guy understand this syntax mistake, that'd be great.
Thanks for your help.
R = 10 #Number of replications (trials)
NumNodes = [50,100,150] #Number of nodes (cities). Also the names of 3 folders.
for item in NumNodes: #Cycle through 50, 100, 150 nodes for folder path XXXNodes
for replications in range(R): #Cycle through fileR.txt by replication number
fNodes = "\\DijkstraShortestPath\\Data\\" + str(item) + "Nodes\\Node" \
+ str(replications + 1) + ".txt"
print FNodes #This is a debugging step for me so I can see what's happening
#Write to files and stuff...
I noticed the error and was able to move on.
It was a capitalization error: print FNodes should have been print fNodes, since Python names are case-sensitive.
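For completeness, the corrected loop (only the print line changes):
R = 10
NumNodes = [50, 100, 150]
for item in NumNodes:
    for replications in range(R):
        fNodes = "\\DijkstraShortestPath\\Data\\" + str(item) + "Nodes\\Node" \
                 + str(replications + 1) + ".txt"
        print fNodes  # matches the lowercase-f name defined above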

How do I make multiple database queries more efficient in Perl?

I have queries that reside in multiple methods, each of which can contain multiple parameters. I am trying to reduce the file size and line count to make the code more maintainable. Below is one such occurrence:
$sql_update = qq { UPDATE database.table
SET column = 'UPDATE!'
WHERE id = ?
};
$sth_update = $dbh->prepare($sql_update);
if ($dbh->err) {
my $error = "Could not prepare statement. Error: ". $dbh->errstr ." Exiting at line " . __LINE__;
print "$error\n";
die;
}
$sth_update->execute($parameter);
if ($dbh->err) {
my $error = "Could not execute statement. Error: ". $dbh->errstr ." Exiting at line " . __LINE__;
print "$error\n";
die;
}
This is just one example; there are various other SELECT examples that take just one parameter, but there are also some with two or more. I guess I am just wondering: would it be possible to encapsulate this all into a function/method, pass in an array of parameters, and how would the parameters be populated into the execute() function?
If this were possible, I could write a method where you simply pass in the SQL query and the parameters and get back a reference to the fetched records. Does this sound safe at all?
If line count and maintainable code are your only goals, your best bet would be to use any one of the several fine ORM frameworks/libraries available. Class::DBI and DBIx::Class are two fine starting points. Just in case you are worried about spending additional time learning these modules -- don't: it took me just one afternoon to get started and be productive. Using Class::DBI, for example, your example is just one line:
Table->retrieve(id => $parameter)->column('UPDATE!')->update;
The only downside (if that) of these frameworks is that very complicated SQL statements require writing custom methods, which may take some additional time (not too much) to get the hang of.
No sense in checking for errors after every single database call. How tedious!
Instead, when you connect to the database, set the RaiseError option to true. Then if a database error occurs, an exception will be thrown. If you do not catch it (in an eval{} block), your program will die with a message, similar to what you were doing manually above.
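For example (a sketch; the DSN, credentials, and id value are placeholders):
my $dbh = DBI->connect('dbi:mysql:mydb', $user, $pass, { RaiseError => 1 });
eval {
    $dbh->do('UPDATE database.table SET column = ? WHERE id = ?',
             undef, 'UPDATE!', $id);
};
die "Database error: $@" if $@;   # or log it and recover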
The "execute" function does accept an array containing all your parameters.
You just have to find a way to indicate which statement handle you want to execute and you're done ...
It would be much better to keep your statement handles somewhere because if you create a new one each time and prepare it each time you don't really rip the benefits of "prepare" ...
About returning all rows you can do that ( something like "while fetchrow_hashref push" ) be beware of large result sets that coudl eat all your memory !
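Something like this, assuming $sth is a statement handle that has already been executed:
my @rows;
while (my $row = $sth->fetchrow_hashref) {
    push @rows, $row;   # each row is a hashref of column => value
}
# beware: this holds the entire result set in memory at once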
Here's a simple approach using closures/anonymous subs stored in a hash by keyword name (compiles, but not tested otherwise), edited to include use of RaiseError:
# define cached SQL in hash, to access by keyword
#
sub genCachedSQL {
my $dbh = shift;
my $sqls = shift; # hashref for keyword => sql query
my %SQL_CACHE;
while (my($name,$sql) = each %$sqls) {
my $sth = $dbh->prepare($sql);
$SQL_CACHE{$name}->{sth} = $sth;
$SQL_CACHE{$name}->{exec} = sub { # closure for execute(s)
my @parameters = @_;
$SQL_CACHE{$name}->{sth}->execute(@parameters);
return sub { # closure for resultset iterator - check for undef
my $row; eval { $row = $SQL_CACHE{$name}->{sth}->fetchrow_arrayref(); };
return $row;
} # end resultset closure
} # end exec closure
} # end while each %$sqls
return \%SQL_CACHE;
} # end genCachedSQL
my $dbh = DBI->connect('dbi:...', $user, $pass, { RaiseError => 1 });
# initialize cached SQL statements
#
my $sqlrun = genCachedSQL($dbh,
{'insert_table1' => qq{ INSERT INTO database.table1 (id, column) VALUES (?,?) },
'update_table1' => qq{ UPDATE database.table1 SET column = 'UPDATE!' WHERE id = ? },
'select_table1' => qq{ SELECT column FROM database.table1 WHERE id = ? }});
# use cached SQL
#
my $colid1 = 1;
$sqlrun->{'insert_table1'}->{exec}->($colid1,"ORIGINAL");
$sqlrun->{'update_table1'}->{exec}->($colid1);
my $result = $sqlrun->{'select_table1'}->{exec}->($colid1);
while (my $row = $result->()) { print join("\t", @$row), "\n"; }
my $colid2 = 2;
$sqlrun->{'insert_table1'}->{exec}->($colid2,"ORIGINAL");
# ...
I'm very impressed with bubaker's example of using a closure for this.
Just the same, if the original goal was to make the code-base smaller and more maintainable, I can't help thinking there's a lot of noise begging to be removed from the original code, before anyone embarks on a conversion to CDBI or DBIC etc (notwithstanding the great libraries they both are.)
If the $dbh had been instantiated with RaiseError set in the attributes, most of that code goes away:
$sql_update = qq { UPDATE database.table
SET column = 'UPDATE!'
WHERE id = ?
};
$sth_update = $dbh->prepare($sql_update);
$sth_update->execute($parameter);
I can't see that the error handling in the original code is adding much that you wouldn't get from the vanilla die produced by RaiseError, but if it's important, have a look at the HandleError attribute in the DBI manpage.
Furthermore, if such statements aren't being reused (which is often the main purpose of preparing them, to cache how they're optimised; the other reason is to mitigate against SQL injection by using placeholders), then why not use do?
$dbh->do($sql_update, \%attrs, @parameters);
