I have a sequential dataset whose records have this form:
0000000520161103152815SHE0009 P1234561234567898765411112222 120AA
The last 2 bytes (positions 71 and 72) are a CH type code of either AA, AB, BA or blank. I'm trying to sort this input and create a report with sections for AA, AB and BA, ignoring any record that doesn't have AA, AB or BA. Each row of a section shows the teller name (SHE0009 above, position 23) and the payment (120 above, the 11 bytes before the type code, position 60). The final line of each section sums all the payments from that section.
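To make the layout concrete, here is a toy sketch of the report I want, in plain Python rather than SORT; the sample records are made up, only the field positions match my file:

```python
# Toy model (plain Python, not DFSORT) of the report described above.
# Field positions are 1-based as in the question: teller at 23 (8 bytes),
# payment at 60 (11 bytes), type code at 71-72. Sample data is made up.
from collections import defaultdict

def rec(teller, payment, typ):
    """Build a 72-byte record with the fields at the stated positions."""
    r = " " * 22 + teller.ljust(8)                       # cols 23-30: teller
    return r.ljust(59) + str(payment).rjust(11) + typ    # 60-70: payment, 71-72: type

records = [rec("SHE0009", 120, "AA"), rec("SHE0010", 80, "AA"),
           rec("SHE0009", 55, "BA"), rec("SHE0001", 1, "  ")]

sections = defaultdict(list)
for r in records:
    typ = r[70:72]
    if typ in ("AA", "AB", "BA"):                        # ignore blank/other codes
        sections[typ].append((r[22:30].strip(), int(r[59:70])))

for typ, rows in sorted(sections.items()):
    print("TRANSFER TYPE:", typ)
    for teller, pay in rows:
        print(f"{teller:<9}{pay:>7}")
    print("SECTION TOTAL:", sum(p for _, p in rows))
```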
Here's my code
//SHE0008 JOB
//SORTSTEP EXEC PGM=SORT
//SYSOUT DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//SORTIN DD DSN='SHECISC.ZEUSBANK.TXNOFFLD',DISP=SHR
//SORTOUT DD DSN=SHE0008.TESTT,
// DISP=(NEW,CATLG,DELETE),SPACE=(CYL,(10,5),RLSE)
//SYSIN DD *
SORT FIELDS=(71,1,CH,A,72,1,CH,A)
INREC BUILD=(71,1,72,1,23,8,60,11,BI,TO=ZD,LENGTH=11)
OUTFIL REMOVECC,
SECTIONS=(1,1,2,1,
HEADER3=(1:C'PAYMENTS BY TELLER',/,X,/,
1:C'TRANSFER TYPE: ',1,1,2,1,/,X,/,
1:C'TELLER',10:C'PAYMENT',/,
1:C'------',10:C'-------'),
TRAILER3=(X,/,
1:C'BRANCH TOTAL: ',16:TOT=(11,11,BI,EDIT=(SIIIITTT),SIGNS=(,-)))),
TRAILER1=(X,/,1:C'GRAND TOTAL: ',TOT=(11,11,BI,
EDIT=(SIIIITTT),SIGNS=(,-))),
OUTREC=(1:7,4,CH,LENGTH=7,10:11,4,BI,EDIT=(SIIIITTT),
SIGNS=(,-))
/*
I'm getting the error SORTIN - DATA SET SHECISC.ZEUSBANK.TXNOFFLD NOT FOUND - STEP WAS NOT EXECUTED. Can anyone see why my dataset cannot be found, and if possible, whether this code will produce my desired result? Thanks.
The file name is SHECICS.ZEUSBANK.TXNOFFLD, but you wrote SHECISC.ZEUSBANK.TXNOFFLD.
You misspelt the first qualifier ("SHECICS"); that could be the problem.
Try removing the quotes around the dataset name.
i.e. change
//SORTIN DD DSN='SHECISC.ZEUSBANK.TXNOFFLD',DISP=SHR
to
//SORTIN DD DSN=SHECISC.ZEUSBANK.TXNOFFLD,DISP=SHR
The reasoning being:
If quotation marks delimit a data set name in a JCL DD statement, JCL
processing cannot perform syntax checking on the statement, and SMS
rejects the input based on its parsing of the data set name. SMS does
not allow the name to be catalogued because quoted data sets cannot be
SMS managed.
SMS being System Managed Storage, although I believe the result would have been the same in pre-SMS times. If I recall correctly, I also saw the odd tape created with DSN=' ' (a number of spaces), which would fool quite a few people who tried to read the tape; i.e. quotes allowed you to use non-conformant dataset names.
The following may be of interest:
Data Set Names
Character sets - Table 2. Special Characters Used in Syntax
I am having trouble understanding the values that I have saved in my Round Robin Database. I do a dump with rrdtool dump mydatabase and get a dump of the data. I found the most recent update and matched it to my rrdupdate command:
$rrdupdate --template=var1:var2:var3:var4:var5 N:15834740:839964:247212:156320:13493356
In my dump at the matching timestamp, I find these values:
<!-- 2016-12-01 10:30:00 CST / 1480609800 --> <row><v>9.0950245287e+04</v><v>4.8264158237e+03</v><v>1.4182428703e+03</v><v>8.9785764359e+02</v><v>7.7501969607e+04</v></row>
The first value is supposed to be var1. Out of scientific notation, that's 90,950.245287, which does not match up at all to my input value. (None of them are decimal.)
Is there something special I have to do to be able to convert values from my dump to get the standard value that I entered?
I can't give you specifics for your case, as you have not shown the full definition of your RRD file (internals, DS definitions, etc.). However...
Values stored in an RRDTool database are subject to Data Normalisation, and are then converted to Rates (unless the DS is of type Gauge in which case they are assumed to be rates already).
Normalisation is when the values are adjusted on a linear basis to make them fit exactly into the time sequence as defined by the Interval (which is often 300 seconds).
If you want to see the values stored exactly as you write them, you need to set the DS type to 'gauge', and make Normalisation a null step. The only way to do the latter is to store the values exactly on a time boundary. So, if the Interval is 300s, then store at 12:00:00, 12:05:00, and so on - otherwise the values will be adjusted.
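As a rough illustration of that adjustment (plain Python with toy numbers, not RRDTool itself), this is what the linear fitting does to a GAUGE value stored off a step boundary:

```python
# Toy model of RRDtool normalisation for a GAUGE DS (step = 300 s).
# An update that does not land exactly on a step boundary is combined
# linearly with the neighbouring update to produce the value stored
# for the boundary - so the stored number differs from what you wrote.
def normalised_pdp(t0, v0, t1, v1, boundary):
    """Linearly interpolate the value at an exact step boundary
    from two off-boundary updates (t0 < boundary <= t1)."""
    frac = (boundary - t0) / (t1 - t0)
    return v0 + frac * (v1 - v0)

# updates at 12:02:00 (value 10) and 12:07:00 (value 20);
# the 12:05:00 boundary gets a blend of the two, not either input
print(normalised_pdp(120, 10.0, 420, 20.0, 300))  # 16.0
```

If the update lands exactly on the boundary, the stored value equals the input, which is why storing at 12:00:00, 12:05:00, and so on keeps your numbers intact.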
There is a lot more information about Normalisation - what it is, and why it is done - in Alex van den Bogaerdt's tutorial
I need your help with an SQL query that has to remove duplicate entries from a table, using the datestamp column as the criterion, in two passes.
The DBMS in question is Microsoft SQL Server.
Here are a few more details:
Terminology: Module is basically a group of single machine workplaces onto which users operate.
Table:
The ModNam column is fixed: there are 15 modules from M A01 to M A15, then the B row (M B01 ... M B15), and so on up to row F.
Pos column is irrelevant at the moment.
MdCod column represents a code of the machine being added to the position in the certain module. It can be replaced by another machine at any given time.
I have one query that inserts data into this table by copying entries from another table each time a new machine is added to one of the positions.
The tricky part for me is a second query that should compare records in two phases:
1) Inside the same module (first pass of the query, shown in red in the example pic attached):
the ModNam values are the same and the MdCod values match between the entries; the entry with the most recent datestamp stays and the other duplicates get deleted.
2) Across different modules (second pass of the query, shown in purple in the example pic attached):
the ModNam values are different but the MdCod values match between the entries; again the entry with the most recent datestamp stays and the other duplicates get deleted.
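To show the outcome I have in mind, here is a small sketch of the logic in Python with sqlite3 as a stand-in for SQL Server (the table and column names are my own minimal guesses; the real query would be T-SQL). Since both passes keep only the most recent datestamp per MdCod, a single window-function delete seems to cover them:

```python
# Sketch of the dedup logic using sqlite3 (stand-in for SQL Server;
# table/column names are illustrative). Both passes boil down to:
# keep the newest row per MdCod, delete the rest - whether the
# duplicate sits in the same module or a different one.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Modules (ModNam TEXT, MdCod TEXT, DateStamp TEXT)")
con.executemany("INSERT INTO Modules VALUES (?,?,?)", [
    ("M A01", "X1", "2016-01-01"),   # pass 1: same-module duplicate
    ("M A01", "X1", "2016-02-01"),
    ("M B03", "X1", "2016-03-01"),   # pass 2: cross-module duplicate (newest, kept)
    ("M A02", "Y9", "2016-01-15"),   # unique, kept
])
con.execute("""
    DELETE FROM Modules WHERE rowid NOT IN (
        SELECT rowid FROM (
            SELECT rowid,
                   ROW_NUMBER() OVER (PARTITION BY MdCod
                                      ORDER BY DateStamp DESC) AS rn
            FROM Modules)
        WHERE rn = 1)
""")
survivors = con.execute(
    "SELECT ModNam, MdCod, DateStamp FROM Modules ORDER BY MdCod").fetchall()
print(survivors)
```

In T-SQL I believe the same idea is usually written as a CTE with ROW_NUMBER() followed by DELETE FROM the CTE WHERE rn > 1.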
Please help and advise.
Example pic (updated):
Thank you all in advance.
I am trying to copy all of the records from a data file (STUDMARKS) into my physical file (MARKS) using the CPYF command.
A R MARKSR TEXT('Marks Records')
A STUDENTID 9S 0 COLHDG('Student' 'ID')
A COURSE_CD 6A COLHDG('Course' 'Code')
A FINAL_MARK 3S COLHDG('Final' 'Mark')
A DATERUN L COLHDG('Date' 'Run')
A K STUDENTID
A K COURSE_CD
This is what I currently have in my MARKS.pf. The STUDMARKS.pf-dta file already defines the first three fields; the DATERUN field gets filled with the date of use.
CPYF FROMFILE(IBC233LIB/STUDMARKS) TOFILE(DS233B32/MARKS) MBROPT(*REPLACE) FMTOPT(*MAP *DROP)
The above is the CPYF command that I ran after creating MARKS.pf. After doing a RUNQRY to see all the records, I noticed that every field but COURSE_CD has been filled; COURSE_CD is completely blank.
I did some research beforehand and ran DSPFFD on both members to ensure that the record lengths and types were all the same, which they were. I did notice, however, that in STUDMARKS.pf-dta every field has a buffer length equal to its field length. The STUDENTID field in MARKS.pf was the only one not to share this property: its field length is 9, but its buffer length is only 5. I'm not sure if that's the reason I'm having such difficulty, and the matter is almost certainly simpler than I'm making it out to be, but I've been at this for quite some time and just can't seem to copy records from one member to another.
It's incredibly frustrating, and help would be greatly appreciated.
I took screen shots of the DSPFFD commands for both files
For STUDMARKS
And For MARKS
EDIT
Just now seeing the spelling error! Smashing my head against the desk but I almost guarantee that is the problem. All of your answers were very informative and helpful though, so thank you very much
EDIT 2
For others: even though I did change the names when recompiling the program, it will not work unless you delete the file first and THEN compile it. Very frustrating, but that's just how it is...
So DLTF [file name] and then recompile
As James noted, the differences in buffer length for STUDENTID are due to one file having it defined as packed and the other having it defined as zoned.
This won't matter to CPYF as both are a compatible numeric and CPYF will map between them as you've seen.
However, this proves that there's more than just a missing field different between the two files. Use DSPFFD and post the definitions of COURSE_CD from both files.
I'd bet either the names are different or the types are.
What you are experiencing is the difference between a packed and a signed decimal field.
More than likely you forgot to specify a datatype in position 35 of the DDS specification for the STUDENTID field in the MARKS file.
For example:
A STUDENTID 9S 0 COLHDG('Student' 'ID')
Data Field Buffer Buffer Field Column
Field Type Length Length Position Usage Heading
STUDENTID ZONED 9 0 9 1 Both Student
ID
A STUDENTID 9 0 COLHDG('Student' 'ID')
Data Field Buffer Buffer Field Column
Field Type Length Length Position Usage Heading
STUDENTID ZONED 9 0 5 1 Both Student
ID
A STUDENTID 9P 0 COLHDG('Student' 'ID')
Data Field Buffer Buffer Field Column
Field Type Length Length Position Usage Heading
STUDENTID PACKED 9 0 5 1 Both Student
ID
The explanation for this behaviour can be found in the DDS reference in the section Data type for physical and logical files (position 35):
For physical files, if you do not specify a data type or duplicate one from a referenced field, the operating system assigns the following defaults:
A (character) if the decimal positions 36 through 37 are blank.
P (packed decimal) if the decimal positions 36 through 37 contain a number in the range 0 through 63.
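If it helps to see where the 9-versus-5 buffer lengths come from, here is a rough sketch (Python, illustrative only, not IBM code) of how zoned and packed decimal lay out a 9-digit number:

```python
# Illustrative sketch of zoned vs packed decimal layout.
# Zoned: one byte per digit (EBCDIC zone nibble 0xF, sign carried in
# the last byte's zone, 0xC = positive) -> a 9-digit field is 9 bytes.
# Packed: two digits per byte plus a trailing sign nibble
# -> a 9-digit field is (9 + 1) / 2 = 5 bytes.
def zoned(n, digits):
    s = str(n).rjust(digits, "0")
    return bytes(0xF0 | int(c) for c in s[:-1]) + bytes([0xC0 | int(s[-1])])

def packed(n, digits):
    nibbles = [int(c) for c in str(n).rjust(digits, "0")] + [0xC]  # sign last
    if len(nibbles) % 2:
        nibbles.insert(0, 0)                     # pad to whole bytes
    return bytes((nibbles[i] << 4) | nibbles[i + 1]
                 for i in range(0, len(nibbles), 2))

print(zoned(123456789, 9).hex())   # f1f2f3f4f5f6f7f8c9  (9 bytes)
print(packed(123456789, 9).hex())  # 123456789c          (5 bytes)
```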
Because the data types are different, FMTOPT(*MAP *DROP) tells the CPYF command to silently drop and default any non-matching fields.
The odd thing is the file field description identifies the field as ZONED when it is really PACKED.
The *DROP value for the FMTOPT parameter excludes like-named fields that do not have the same attribute and relative position in both files. The COURSE_CD field has a different position in the receiving file.
I have SQL Server 2012 and I want to know what the usage of a sequence is. I'm looking for a sample that explains the usage of a sequence.
EDIT
I know how to create and use a sequence in a database. I want to know what the practical scenarios for using a sequence are.
CREATE SEQUENCE dbo.OrderIDs
AS INT
MINVALUE 1
NO MAXVALUE
START WITH 1;
SELECT NextOrderID = NEXT VALUE FOR dbo.OrderIDs
UNION ALL SELECT NEXT VALUE FOR dbo.OrderIDs
UNION ALL SELECT NEXT VALUE FOR dbo.OrderIDs;
Results:
NextOrderID
-----------
1
2
3
See here for the original source and more examples. The page refers to SQL Server "Denali", which was the beta of SQL Server 2012, but the syntax is still the same.
One of the ways I leverage the SEQUENCE command is for reference numbers in an ASP/C# DetailsView page (as an example). I use the DetailsView to enter requests into a database, and the sequence serves as the request/ticket number for each request. I set the initial sequence to start with a specific number and increment by 1 for each request.
If I present these requests in a GridView, I make the sequence reference numbers appear but don't make them editable. It's great as a reference number when records have similar values in their other fields. It's also perfect for customers when they have questions about a specific entry in a given database. This way I have a unique number per entry whether or not the rest of the information is identical.
Here's how I generally leverage the SEQUENCE command:
CREATE SEQUENCE blah.deblah
START WITH 1
INCREMENT BY 1
NO CYCLE
NO CACHE
In short, I start my sequence at #1 (you can choose any number you want to start with) and it counts upward in increments of 1. I don't cycle the sequence numbers when they reach the system maximum.