How can I change the circled value? (5CB4F8B3h)
I am not a professional, but it is vital for me to influence this value.
Treat it with understanding :)
Using a hex editor, locate the offset containing the value you need to modify, and change it. TimeDateStamp is stored in epoch time, so get the desired timestamp in epoch time and convert it to hexadecimal (you can use a calculator in "programmer mode": type the decimal timestamp and you will get its hex value).
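If it helps, the conversion is a one-liner in Python; the circled value 0x5CB4F8B3 happens to decode to 2019-04-15 21:33:39 UTC, so this round-trips it:

```python
from datetime import datetime, timezone

# Build the desired timestamp as seconds since the epoch, then print it in hex.
ts = int(datetime(2019, 4, 15, 21, 33, 39, tzinfo=timezone.utc).timestamp())
print(hex(ts))  # 0x5cb4f8b3
```

Write those four bytes into the TimeDateStamp field in little-endian order (B3 F8 B4 5C).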
After some googling, I found that a tool called PE Explorer can modify the timestamp of PE executables, which is what you're attempting to do, so this could be a better solution for a non-professional. The instructions are here. I'm in no way affiliated with this product; I just thought it may be useful for your needs.
Modify these values in IDA with the IDC function PatchDword:
success PatchDword(long ea,long value);
This function changes a dword (4 bytes) at address ea to value.
You want to change 3 values at 0x14010D484, 0x14010D4A0 and 0x14010D4BC.
In IDA run "File->IDC command..." and enter this script with newValue1, newValue2 and newValue3 set to new values for these addresses:
if( !PatchDword(0x14010D484, newValue1) || !PatchDword(0x14010D4A0, newValue2) || !PatchDword(0x14010D4BC, newValue3))
Message("Failed to patch!\n");
Click OK and check the output window - there should be no error messages.
Generate DIF file - "File->Produce File->Create DIF file...". The DIF file is a text file with a list of patched bytes. Each line looks like:
0005A827: 5D BB
In this example 0005A827 is an offset in the binary on the disk, 5D is the original value, BB is the new value.
Apply this DIF file with a patch applier, or do it manually in a hex editor: go to the offsets from the DIF file and change the values of the bytes.
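Scripting the application is also easy. A minimal sketch in Python, assuming the `offset: old new` line format shown above (real DIF files also contain a header line and the file name, which this simply skips):

```python
def apply_dif(dif_text, data):
    """Apply 'offset: original new' patch lines to a bytearray in place.
    Lines that don't parse as three hex fields (headers, blanks) are skipped."""
    for line in dif_text.splitlines():
        parts = line.replace(':', ' ').split()
        if len(parts) != 3:
            continue
        try:
            offset, old, new = (int(p, 16) for p in parts)
        except ValueError:
            continue  # not a patch line
        if data[offset] != old:
            raise ValueError('original byte mismatch at offset %X' % offset)
        data[offset] = new

image = bytearray(b'\x00\x5d\x00')
apply_dif('000001: 5D BB', image)  # image[1] is now 0xBB
```

Checking the original byte before writing, as above, catches the case where the DIF was generated against a different build of the binary.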
Your binary may use checksum checking; in that case it will not run after the patch. The checksum fix depends on the checksum type used.
I have a hex file for an STM32F427 that was built using GCC (gcc-arm-none-eabi) version 4.6 and had contiguous memory addresses. I wrote a boot loader for loading that hex file and also added a checksum capability to make sure the hex file is correct before starting the application.
Snippet of Hex file:
:1005C80018460AF02FFE07F5A64202F1D00207F5F9
:1005D8008E4303F1A803104640F6C821C2F2000179
:1005E8001A460BF053F907F5A64303F1D003184652
:1005F8000BF068F907F5A64303F1E80340F6FC1091
:10060800C2F2000019463BF087FF07F5A64303F145
:10061800E80318464FF47A710EF092FC07F5A643EA
:1006280003F1E80318460EF03DFC034607F5A64221
:1006380002F1E0021046194601F0F2FC07F56A5390
As you can see, all the addresses are sequential. Then we changed the compiler to version 4.8 and I got the same type of hex file.
But now we have moved to compiler version 6.2, and the generated hex file is not contiguous. It is somewhat like this:
:10016000B9BC0C08B9BC0C08B9BC0C08B9BC0C086B
:10017000B9BC0C08B9BC0C08B9BC0C08B9BC0C085B
:08018000B9BC0C08B9BC0C0865
:1001900081F0004102E000BF83F0004330B54FEA38
:1001A00041044FEA430594EA050F08BF90EA020FA5
As you can see, the record at 0180 holds only 8 bytes and the next record starts at 0190, so the remaining 8 bytes (0188 to 018F) are 0xFF as they are not flashed.
Now the boot loader is kind of dumb: we just pass the starting address and the number of bytes over which to calculate the checksum.
Is there a way to make the hex file contiguous, as with compilers 4.6 and 4.8? The code is the same in all three cases.
If post-processing the hex file is an option, you can consider using the IntelHex Python library. It lets you manipulate hex file data as data (ignoring the 'markup': record type, address, checksum, etc.) rather than as lines, and will, for instance, create output with the correct line checksums.
A fast way to get this up and running could be to use the bundled convenience scripts hex2bin.py and bin2hex.py:
python hex2bin.py --pad=FF noncontiguous.hex tmp.bin
python bin2hex.py tmp.bin contiguous.hex
The first line converts the input file noncontiguous.hex to a binary file, padding it with FF where there is no data. The second line converts the binary file back to a hex file.
The result would be
:08018000B9BC0C08B9BC0C0865
becomes
:10018000B9BC0C08B9BC0C08FFFFFFFFFFFFFFFF65
As you can see, padding bytes are added where the input doesn't have any data, equivalent to writing the input file to the device and reading it back out. Bytes that are in the input file are kept the same - and at the same address.
The checksum is also correct: changing the length byte from 0x08 to 0x10 compensates for the extra 0xFF bytes. If you padded with something else, IntelHex would still output the correct checksum.
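You can verify this yourself: the record checksum is the two's complement of the sum of all the other bytes on the line, so adding eight 0xFF bytes (whose sum is 0xF8 mod 256) while bumping the length byte by 0x08 changes the total by exactly 0x100, i.e. nothing mod 256. A quick stdlib-only check:

```python
def record_checksum(record):
    """Checksum of an Intel HEX record: two's complement (mod 256) of the
    sum of every byte on the line except the stored checksum itself."""
    data = bytes.fromhex(record.lstrip(':'))[:-1]  # drop the stored checksum
    return (-sum(data)) & 0xFF

# Both the original 8-byte record and the FF-padded 16-byte one end in 0x65:
print(hex(record_checksum(':08018000B9BC0C08B9BC0C0865')))                  # 0x65
print(hex(record_checksum(':10018000B9BC0C08B9BC0C08FFFFFFFFFFFFFFFF65')))  # 0x65
```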
You can skip the creation of a temporary file by piping: omit tmp.bin in the first line and replace it with - in the second line:
python hex2bin.py --pad=FF noncontiguous.hex | python bin2hex.py - contiguous.hex
An alternative could be to have a base file of all FF bytes and use the hexmerge.py convenience script to merge gcc's output onto it with --overlap=replace.
The longer, more flexible way, would be to implement your own tool using the IntelHex API. I've used this to good effect in situations similar to yours - tweak hex files to satisfy tools that are costly to change, but only handle hex files the way they were when the tool was written.
One of many possible ways:
Make your hex file with v6.2, e.g., foo.hex.
Postprocess it with this Perl oneliner:
perl -pe 'if(m/^:(..)(.*)$/) { my $rest=16-hex($1); $_ = ":10" . $2 . ("FF" x $rest) . "\n"; }' foo.hex > foo2.hex
Now foo2.hex will have all 16-byte lines
Note: all this does is FF-pad to 0x10 bytes. It doesn't check addresses or anything else.
Explanation
perl -pe '<some script>' <input file> runs <some script> for each line of <input file>, and prints the result. The script is:
if(m/^:(..)(.*)$/) { # grab the existing byte count into $1
my $rest=16 - hex($1); # how many bytes of 0xFF we need
$_ = ":10" . $2 . ("FF" x $rest) . "\n"; # make the new 16-byte line
# existing bytes-^^ ^^^^^^^^^^^^^^-pad bytes
}
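The same transformation as a Python sketch, with one refinement over the one-liner: it leaves non-data records (such as the EOF record) alone. As with the Perl version, the old checksum byte is not recomputed; it simply becomes the first pad byte, and the lengthened byte count keeps the line checksum valid (each added byte contributes 0xFF and the length byte grows by 1, so the sum changes by a multiple of 256):

```python
import re

def pad_record(line, width=16):
    """FF-pad a type-00 data record's byte count up to `width` bytes,
    mirroring the Perl one-liner. Other record types pass through."""
    m = re.match(r'^:(..)(....)(..)(.*)$', line.strip())
    if not m or m.group(3) != '00':
        return line.strip()  # not a data record
    rest = width - int(m.group(1), 16)
    if rest <= 0:
        return line.strip()  # already full width
    return ':%02X' % width + m.group(2) + '00' + m.group(4) + 'FF' * rest

print(pad_record(':08018000B9BC0C08B9BC0C0865'))
# :10018000B9BC0C08B9BC0C0865FFFFFFFFFFFFFFFF
```

Note that the byte at the first padded address ends up holding the old checksum (0x65 here) rather than 0xFF, exactly as the Perl version produces.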
Another solution is to change the linker script to ensure the preceding .isr_vector section ends on a 16-byte boundary; the map file reveals that the following .text section is 16-byte aligned.
This ensures there are no unprogrammed flash bytes between the two sections.
You can use bincopy to fill all empty space with 0xff.
$ pip install bincopy
$ bincopy fill foo.hex
Use the --gap-fill option of objcopy, e.g.:
arm-none-eabi-objcopy --gap-fill 0xFF -O ihex firmware.elf firmware.hex
I am developing on an embedded system (STM32F4) and I tried to send some data to a simple Windows Forms client program on the PC side. When I used a character-based string format everything worked fine, but when I changed to a binary package to increase performance I ran into a problem with escape characters.
I'm using nanopb to implement Google's Protocol Buffers for transmission, and I observed that for about 5% of packages I receive exceptions in my client program telling me that my packages are corrupted.
I debugged in Wireshark and saw that in these corrupted packages the size was 2-4 bytes smaller than the original package size. Upon further inspection I found that the corrupted packages always included the byte value 27 and other packages never included this value. I searched for it and saw that this value represents an escape character and that this might lead to problems.
The technical document of the Wi-Fi module I'm using (Gainspan GSM2100) mentions that commands are preceded by an escape character, so I think I need to get rid of these values in my package.
I couldn't find a solution to my problem, so I would appreciate it if somebody more experienced could lead me to the right approach to solve it.
How are you sending the data? Are you using a library or sending raw bytes? According to the manual, your data commands should start with an escape sequence, but also have data length specified:
// Each escape sequence starts with the ASCII character 27 (0x1B),
// the equivalent to the ESC key. The contents of < > are a byte or byte stream.
// - Cid is connection id (udp, tcp, etc)
// - Data Length is 4 ASCII char represents decimal value
// i.e. 1400 bytes would be '1' '4' '0' '0' (0x31 0x34 0x30 0x30).
// - Data size must match with specified length.
// Ignore all command or esc sequence in between data pay load.
<Esc>Z<Cid><Data Length xxxx 4 ascii char><data>
Note the remark regarding data size: "Ignore all command or esc sequence in between data pay load".
For example, this is what the GSCore::writeData function in GSCore.cpp looks like:
// Including a trailing 0 that snprintf insists to write
uint8_t header[8];
// Prepare header: <esc> Z <cid> <ascii length>
snprintf((char*)header, sizeof(header), "\x1bZ%x%04d", cid, len);
// First, write the escape sequence up to the cid. After this, the
// module responds with <ESC>O or <ESC>F.
writeRaw(header, 3);
if (!readDataResponse()) {
if (GS_LOG_ERRORS && this->error)
this->error->println("Sending bulk data frame failed");
return false;
}
// Then, write the rest of the escape sequence (-1 to not write the
// trailing 0)
writeRaw(header + 3, sizeof(header) - 1 - 3);
// And write the actual data
writeRaw(buf, len);
This should most likely work. Alternatively, a dirty hack might be to "escape the escape character" before sending, i.e. replace each escape byte (decimal 27, 0x1B) with two of them before sending - but this is just a wild guess and I presume you should just check the manual.
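A sketch of the framing described by the quoted manual, in Python (the function name is mine, not from any library; the cid is assumed to be a single hex digit, as in the C++ snippet above):

```python
ESC = b'\x1b'  # ASCII 27, the escape character from the manual

def frame_bulk_data(cid, payload):
    """Build a <Esc>Z<Cid><4-ascii-digit length><data> bulk data frame.
    Because the length is declared up front, ESC bytes inside the payload
    need no escaping: the module reads exactly `len(payload)` raw bytes."""
    if not 0 <= len(payload) <= 9999:
        raise ValueError('length field is only 4 ASCII digits')
    return ESC + b'Z' + b'%x%04d' % (cid, len(payload)) + payload

frame = frame_bulk_data(1, b'hello')  # b'\x1bZ10005hello'
```

The key point for the asker: with this framing the receiver counts bytes instead of scanning for delimiters, which is exactly why the manual says to "ignore all command or esc sequence in between data pay load".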
I know there are multiple questions regarding this subject, but they did not help.
Whenever I try to compile anything, I keep getting the same error:
arm-none-eabi-gcc.exe: error: CreateProcess: No such file or directory
I guess it means that it cannot find the compiler.
I have tried tweaking the path settings
C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\nxp\LPCXpresso_7.6.2
326\lpcxpresso\tools\bin;
Seems to be right?
I have tried using Sysinternals process monitor
I can see that a lot of arm-none-eabi-gcc.exe calls get a result of NAME NOT FOUND, but there are a lot of successful results too.
I have also tried reinstalling the compiler and the LPCXpresso, no luck.
If I type arm-none-eabi-gcc -v I get the version, so the compiler itself works.
But when I try to compile in CMD like this: arm-none-eabi-gcc led.c
I get the same error as stated above:
arm-none-eabi-gcc.exe: error: CreateProcess: No such file or directory
I tried playing around more with PATH in the environment variables, no luck. I feel like something is stopping LPCXpresso from finding the compiler.
The only antivirus this computer has is Avira, and I disabled it. I also allowed the compiler and LPCXpresso through the firewall.
I have tried some more things; I will add them shortly, after trying to reproduce the test.
It seems your problem is a messy interaction between Vista and GCC. Long story short: a CRT function, access, behaves differently on Windows and Linux. This difference is actually mentioned in Microsoft's documentation, but the GCC folks didn't notice. It leads to a bug on Vista because this version of Windows is stricter on this point.
This bug is mentioned here: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=33281
I have no proof that your problem comes from there, but the chances are good.
The solutions:
Do not use Vista
Recompile arm-none-eabi-gcc.exe with the flag -D__USE_MINGW_ACCESS
Patch arm-none-eabi-gcc.exe
The third is the easiest, but it's a bit tricky. The goal is to hijack the access function and add an instruction to prevent the undesired behavior. To patch your GCC, you have two options: you upload your .exe and I patch it for you, or I give you the instructions to patch it yourself. (I can also patch it for you first, then give you the instructions if it works.) The patching isn't really hard and doesn't require advanced knowledge, but you must be rigorous.
As I said, I don't have this problem myself, so I don't know if my solution really works. The patch seems to work for this problem.
EDIT2:
The exact problem is that the Linux access has a parameter flag to check whether a file is executable. The Windows access cannot check for this. Most Windows versions simply ignore this flag and check whether the file exists instead, which usually gives the same behavior. The problem is that Vista doesn't ignore it: whenever access is used to check for executability, it returns an error. This leads GCC programs to think that some executables are not there. The patch induced by -D__USE_MINGW_ACCESS, or done manually, deletes the flag when access is called, thus checking for existence instead, just like other Windows versions.
EDIT:
The patching is actually needed for every GCC program that invokes other executables, not only gcc.exe. So far that means gcc.exe and collect2.exe.
Here are the patching instructions:
Backup your arm-none-eabi-gcc.exe.
Download and install CFF Explorer (direct link here).
Open arm-none-eabi-gcc.exe with CFF Explorer.
On the left panel, click on Import Directory.
In the module list that appears, click on the msvcrt.dll line.
In the import list that appears, find _access. Be careful here: the list is long, and there are multiple _access entries. The last one (the very last entry, for me) is probably the right one.
When you click on the _access line, an address should appear on the list header, in the 2nd column 2nd row, just below FTs(IAT). Write down that address on notepad (for me, it is 00180948, it may be different). I will refer to this address as F.
On the left panel, click on Address Converter.
Three fields should appear, enter address F in the File Offset field.
Write down in notepad a 6-byte value: the first two bytes are FF 25; the last 4 are the address that appeared in the VA field, IN REVERSE. For example, if 00586548 appeared in the VA field, write down FF 25 48 65 58 00 (spaces added for legibility). I will refer to this value as J. This value J is the instruction that jumps to the _access function.
On the left panel, click on Section Headers.
In the section list that appeared on the right, click on the .text line (the .text section is where the code resides).
In the editor panel that appears below, click on the magnifier and, in the Hex search bar, search for a series of eleven 90 bytes (9090909090...; 90 is NOP in assembly). This is to find a code cave (unused space) to insert the patch, which is 11 bytes long. Once you have found a code cave, write down the offset of its first 90. The exact offset is displayed at the very bottom as Pos : xxxxxxxx. I will refer to this offset as C.
Use the editor to change the sequence of eleven 90 bytes: the first 5 bytes are 80 64 E4 08 06. These 5 bytes are the instruction that prevents the wrong behavior. The next 6 bytes are the value J (e.g. FF 25 48 65 58 00), to jump back to the _access function.
Click on the arrow icon (Go To Offset) a bit below, and enter 0, to navigate to the beginning of the file.
Use the Hex search bar again to search for the value J. If you find the bytes you just modified, skip them. The J value you need is located among many values containing FF 25 and 90 90; that is the DLL jump table. Write down the offset of the value J you found (the offset of its first byte, FF). I will refer to this offset as S. Note 1: If you can't find the value, maybe you picked the wrong _access in step 6, or did something wrong between steps 6 and 10. Note 2: The search bar doesn't loop when it hits the end; go to offset 0 manually to search again.
Use a hexadecimal 32-bit two's-complement calculator (like this one: calc.penjee.com) to calculate C - S - 5. If your offset C is 8C0 and your offset S is 6D810, you should obtain FF F9 30 AB (8C0 minus 6D810, minus 5).
Replace the value J you found (at step 16) with 5 bytes: the first byte is E9; the last 4 are the result of the last operation, IN REVERSE. If you obtained FF F9 30 AB, replace the value J (e.g. FF 25 48 65 58 00) with E9 AB 30 F9 FF. The 6th byte of J can be left untouched. These 5 bytes are the jump to the patch.
File -> Save
Notes: You should have modified a total of 16 bytes. If the patched program crashes, you did something wrong; even if the patch doesn't fix the problem, a correct patch can't induce a crash.
Let me know if you have difficulties somewhere.
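The displacement arithmetic in the steps above can be checked with a few lines of Python (the E9 opcode takes a 32-bit displacement relative to the end of the 5-byte instruction, which is where the "- 5" comes from):

```python
def rel32(patch_offset, jump_offset):
    """Operand of a 5-byte E9 jmp: destination minus the address just past
    the jump instruction, as a 32-bit two's-complement value (C - S - 5)."""
    return (patch_offset - jump_offset - 5) & 0xFFFFFFFF

# The worked example from the instructions: C = 0x8C0, S = 0x6D810
print('%08X' % rel32(0x8C0, 0x6D810))  # FFF930AB
```

Reversed into little-endian byte order, FF F9 30 AB becomes AB 30 F9 FF, which is why the patched bytes read E9 AB 30 F9 FF.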
I am working on a program that outputs PDF documents. Given a sequence of UTF-8 encoded characters and the name of a font to render it with, I would like to show the appropriate glyphs that make up the actual content of the document. I would like to be able to display national characters such as č or ö. It would be great to support ligatures like ae or ffi.
The problem is, I do not know how the actual glyphs to be shown are specified (inside a content stream, for example).
If I, for example, want to display the string "Hello World", I need not worry about encoding; I simply write (Hello World)Tj. The PDF reader will then use the appropriate font to render this string.
But what if I wanted to show the string
It is difficult to read the PDF specification all day. Prostě dočista nemožné!
with the ligatures ffi, fi and ea and the Czech national symbols ě, č and é in a given font, how would I proceed?
I am trying to get through the PDF specification, but it is not easy.
How do I find out the "code of the glyph" that corresponds to a given character or ligature?
How is this code encoded within a PDF content stream?
Help is much appreciated.
Edit: I may have overestimated the problem. Counting the glyphs that are needed to display a "common European document", I cannot think of a way this number could exceed 256. If my assumptions are correct, I can remap the encoding of the font completely. This should be sufficient to cover all common symbols of the Latin alphabet, numbers, punctuation, common symbols like ( and [, and still leave plenty of room for national symbols, ligatures and other elements of high-quality typography. (I can implement a priority queue to select the most used ligatures if the total number of glyphs should exceed 256.)
That being said, I do not think I need to use the CID-keyed fonts.
Still, I wonder how to map UTF-8 encoded characters onto glyphs of an arbitrary font. I have the AFM of the font available. For the DejaVu font, for example, the character information goes like this:
C 63 ; WX 536 ; N question ; B 67 -15 488 743 ;
C 64 ; WX 1000 ; N at ; B 65 -174 930 705 ;
C 65 ; WX 722 ; N A ; B -6 0 732 730 ;
But after the 256th character is mapped, the codes are -1:
C 255 ; WX 564 ; N ydieresis ; B -3 -223 563 767 ;
C -1 ; WX 722 ; N Amacron ; B -6 0 732 899 ;
C -1 ; WX 596 ; N amacron ; B 49 -15 568 746 ;
For example, if I had the sequence 11100010 10000010 10101100 (Euro sign) in my input, how would I know what glyph name it corresponds to so that I can map it in the /Encoding dictionary?
Encoding varies based on the font type. Typically, there is a font resource that is defined as the current font and within that font dictionary is a reference to a base font and a means of describing the encoding (via the /Encoding key). If that key doesn't exist, the encoding will be "standard", but you can use other simple encodings such as /MacRoman and /WinAnsi for the value of the encoding, or you can specify a standard encoding and an encoding delta to show the differences.
Easy so far - as long as you're working with 8-bit characters. For many early apps, they would create a couple different fonts, one with say Roman encoding and another that maps roman characters to unavailable characters. In order to do that, your encoding delta would include references to the ligatures and other typically non-encoded symbols. This works great for Type 1 fonts, but is specifically contraindicated by the spec in the section on TrueType Fonts:
A nonsymbolic font should specify MacRomanEncoding or WinAnsiEncoding as the value of its Encoding entry, with no Differences array
This is vastly different when you want to use, say, Unicode, in which case you would be using a CID font (a font keyed by character IDs). In that case there is a procedure referenced by the font which is used to map from a character encoding in your string to a character ID in your font (and vice versa). I would strongly recommend that you read and fully understand section 9.7 of the PDF specification on Composite Fonts, which describes everything you need in order to encode UTF-16BE into strings to get them to render properly in PDF. It is decidedly non-trivial in that there are a lot of details that, if missed, will result in a blank rendered page in Acrobat.
As a software engineer who professionally writes code that produces and consumes PDF, let me state that when I get tasked with having to put in special cases in my code to deal with non-spec compliant PDF, a little piece of me dies inside. Please, please, don't even think of releasing any documents you produce into the wild until they pass Preflight at the least. This is not the same as "Acrobat renders it so it must be OK." Let me give you an example - I've seen a number of files in the wild that include fonts that are missing the key elements of the FontDescriptor dictionary, including /Ascent, /Descent, /CapHeight, etc. These render in Acrobat, but are in violation of the spec since each of those is required. I know how Acrobat handles that - it comes with an enormous database of font metrics and looks up the value if it can't find it in the file (heck, it might even ignore the metrics in the file). I don't have that luxury, so I have to do a number of (potentially expensive/invalid) stop gap measures.
You might want to consider using a library to do this work for you - maybe iText which has a decent enough licensing scheme for education because, I get it, you're a student. There are some C based libraries too. Maybe you can figure a way to make GhostScript do your bidding.
If you are unwilling or unable to follow my advice with regards to cleaving to the specification or to use a library which ostensibly does so, please do me the favor of at least filling out the /Creator and /Producer strings in the Document Information Dictionary referenced by the trailer (see sections 14.3.3 and section 7.5.5). That way, when I have to parse/consume/manipulate your documents, I will have a way to directly cast aspersions on your parentage.
Let's go top down and start with the page object - I'm using output from my own library and am stripping out what I think you don't need:
1 0 obj <<
/Type /Page
/Parent 18 0 R
/Resources <<
/Font <<
/U0 13 0 R
>>
/ProcSet [ /PDF /Text ]
>>
/MediaBox [ 0 0 612 792 ]
/Contents 19 0 R
/Dur -1
>>
endobj
U0 is a reference to a font that will be used for unicode text.
The content stream is intended to print the following text: Greek: Γειά σου κόσμος.
BT /U0 24 Tf 72 670 Td
(\000G\000r\000e\000e\000k\000:\000 \003\223\003\265\003\271\003\254\000 \003\303\003\277\003\305\000 \003\272\003\314\003\303\003\274\003\277\003\302)
Tj ET
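With /Identity-H the two-byte codes in that literal string are the CIDs themselves, and in this example they coincide with Unicode code points (Γ is U+0393, written as \003\223), so the string is effectively UTF-16BE with non-printable bytes rendered as octal escapes. A hypothetical helper that produces such strings (my own sketch, not the code that generated this output):

```python
def pdf_utf16_string(text):
    """Encode text as UTF-16BE and render it as a PDF literal string,
    keeping printable ASCII and octal-escaping everything else."""
    out = []
    for byte in text.encode('utf-16-be'):
        if 32 <= byte < 127 and chr(byte) not in '()\\':
            out.append(chr(byte))        # printable, safe inside ( )
        else:
            out.append('\\%03o' % byte)  # e.g. 0x93 -> \223
    return '(' + ''.join(out) + ')'

print(pdf_utf16_string('G'))   # (\000G)
print(pdf_utf16_string('Γ'))   # (\003\223)
```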
The font dictionary referenced looks like this:
13 0 obj <<
/BaseFont /DejaVuSansCondensed
/DescendantFonts [ 4 0 R ]
/ToUnicode 14 0 R
/Type /Font
/Subtype /Type0
/Encoding /Identity-H
>>
endobj
Which has the /ToUnicode entry pointing to a stream containing the following PostScript code:
/CIDInit /ProcSet findresource begin 12 dict begin begincmap /CIDSystemInfo << /Registry (Adobe) /Ordering (UCS) /Supplement 0 >> def /CMapName /Adobe-Identity-UCS def /CMapType 2 def 1 begincodespacerange <0000> <FFFF> endcodespacerange 1 beginbfrange <0000> <FFFF> <0000> endbfrange endcmap CMapName currentdict /CMap defineresource pop end end
which is defined by the CID font specification.
and the DescendantFonts array points to this object:
4 0 obj <<
/Subtype /CIDFontType2
/Type /Font
/BaseFont /DejaVuSansCondensed
/CIDSystemInfo 7 0 R
/FontDescriptor 8 0 R
/DW 1000
/W 9 0 R
/CIDToGIDMap 10 0 R
>>
The CIDToGIDMap is a compressed stream with the actual map; the CIDSystemInfo is <</Registry (Adobe) /Ordering (UCS) /Supplement 0>> (it's a reference because I share it among all Unicode fonts that I output). The FontDescriptor is straightforward boilerplate, and the W array is derived from the font metrics.
With all this detail, are you understanding why I don't say lightly, "walk away before you pollute my environment any further"?
I'm really beginning to question the nature of this assignment. Writing a simple PDF is one thing, but writing code that can handle full Unicode in any arbitrary OpenType/TrueType font requires you to understand the CID spec and the TrueType spec (hint: I have a full TrueType parser that can extract all the metrics for any glyph in a font so that I can output the /W array).
If, however, you are required to only output to Type 1 fonts, well my friend, your life just got a whole lot easier, because you would take your entire UTF-8 stream, read it as Unicode, and for every unique character that comes in, build a map from the Unicode character to a glyph name and an internal character number by using this table. The internal character number is essentially the index of the character's first appearance, mod 256. So for example, if you have fewer than 257 unique characters on the page, you will have exactly one font that is encoded to map to the characters in the order that they arrived. If you had "abcba" for input, the output string in PDF would be (\000\001\002\001\000) and would map to a font with an encoding dictionary whose differences array would be [0/a/b/c]. If you have n unique characters where n > 256, you're going to have (n / 256) + 1 fonts, each with its own encoding.
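That remapping scheme is easy to sketch (a toy version that ignores the glyph-name lookup and the n > 256 font splitting):

```python
def remap_page(text):
    """Assign each unique character a code in order of first appearance;
    return the PDF literal string (octal escapes) and the glyph order
    for the /Differences array."""
    codes = {}
    for ch in text:
        codes.setdefault(ch, len(codes))  # first appearance gets next code
    pdf_string = '(' + ''.join('\\%03o' % codes[ch] for ch in text) + ')'
    return pdf_string, list(codes)

s, order = remap_page('abcba')
# s == '(\000\001\002\001\000)', order == ['a', 'b', 'c'],
# i.e. /Differences [0 /a /b /c] once names are looked up in the glyph table.
```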
If your teacher/professor wants anything but Type 1 fonts in a short period of time, s/he has unrealistic expectations for the students and/or low expectations for the quality of output. You should ask whether you are required to handle CID fonts, and if you are, then your professor is at the very least a sadist. It took me, a seasoned professional, about 4 days to write a TrueType parser for extracting widths. I had the advantage of (1) using a managed language (C#), which cut down on the concerns that will be biting your ass in C, and let me use reflection to automate parsing, and (2) when I don't have interruptions, I write solid code about 10-20 times faster than a typical student, so my 32 hours would translate into 320 student hours, more or less (then again, my code has different constraints than yours - it has to consume any crap font it gets gracefully), so let's call it 200, or less if you're allowed to steal something like stb. That's just for getting one particular element in the font descriptor.
I have the following call statement :
038060 CALL PROG USING
038070 DFH
038080 L000
038090 ZONE-E
038100 ZONE-S.
This call is dynamic and uses PROG.
PROG is a group defined as :
018630 01 XX00.
018640 10 PROG.
018650 15 XX00-S06 PICTURE X(6)
018660 VALUE SPACE.
018670 15 XX00-S02 PICTURE X(2)
018680 VALUE SPACE.
018690 10 XX00-S92 PICTURE 9(02)
018700 VALUE ZERO.
018710 10 XX00-S91 PICTURE 9(1)
018720 VALUE ZERO.
018730 10 XX00-S9Z PICTURE 9(1)
018740 VALUE ZERO.
018750 10 XX00-9B0 PICTURE X(05)
018760 VALUE SPACE.
018770 10 XX00-0B0 PICTURE X(02)
018780 VALUE SPACE.
018790 10 XX00-BB1 PICTURE X(01)
018800 VALUE SPACE.
018810 10 XX00-SFN PICTURE X(07)
I cut it here, but there are many more fields after...
It seems that the actual program name to use is stored in:
XX00-S06
and
XX00-S02
I also have other cases where the name is spread over 3 or 4 fields, and the program name length is not always 8.
So my question is: how does COBOL know where to pick up the right program name in the group? What are the resolution rules?
Configuration: I use the Micro Focus Net Express compiler, and the environment is UniKix.
Dynamic call rules in COBOL are fairly simple. Given something like:
CALL WS-NAME USING...
COBOL will resolve the program name currently stored in WS-NAME against the load module libraries available to it, using a linear search. The first load module entry point name that matches WS-NAME is used.
It doesn't matter how complex, or simple, the definition of WS-NAME is. The total length used for the name is whatever the length of WS-NAME is. For example:
01 WS-NAME.
05 WS-NAME-FIRST-PART PIC X(3).
05 WS-NAME-MIDDLE-PART PIC X(2).
05 WS-NAME-LAST-PART PIC X(3).
WS-NAME is composed of 3 subordinate fields giving a total of 8 characters. You can populate these individually or just move something into WS-NAME as a whole. If the value moved in is shorter than 8 characters, the trailing characters of the receiving field will be set to spaces. For example:
01 WS-SHORT-NAME.
05 WS-SHORT-NAME-FIRST-PART PIC X(4) VALUE 'AAAA'.
05 WS-SHORT-NAME-LAST-PART PIC X(2) VALUE 'BB'.
Here WS-SHORT-NAME is only 6 characters long. Moving WS-SHORT-NAME to any longer PIC X variable, as in:
MOVE WS-SHORT-NAME TO WS-NAME
will result in WS-NAME taking on the value 'AAAABBbb' (the two trailing spaces are shown here as lowercase b). During library search for a matching entry point name, the trailing spaces are not significant, so on the CALL statement you could use either:
CALL WS-NAME
or
CALL WS-SHORT-NAME
And they will resolve to the same entry point.
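The matching rule can be sketched in a few lines of Python (a toy model of the behavior described above, not how any COBOL runtime is implemented):

```python
def resolve_call(name, entry_points):
    """Resolve a dynamic CALL: the whole group item is the program name,
    and trailing spaces are not significant when matching entry points."""
    wanted = name.rstrip(' ')
    return next((ep for ep in entry_points if ep == wanted), None)

# 'AAAABB' moved into an 8-character field arrives space-padded;
# both forms resolve to the same entry point:
print(resolve_call('AAAABB  ', ['OTHER', 'AAAABB']))  # AAAABB
print(resolve_call('AAAABB',   ['OTHER', 'AAAABB']))  # AAAABB
```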
I am not sure what the length rules are for Micro Focus COBOL but, for IBM z/OS, dynamically called program names cannot exceed 8 characters (if they do, the name is truncated to 8 characters).
I will add a little more to NeilB's answer, with specific information about Micro Focus COBOL.
FYI: PROGRAM-ID and entry-point names are restricted to 30-31 characters (check the "System Limits and Programming Restrictions" section in the docs).