In our project we have to use the Green Hills Compiler Suite MULTI. Everything is configured and running. While reading the compiler manual, I found a linker option that generates a call graph.
I added the option to the linker (elxr) in the makefile with
LINK_OPT += -callgraph
which generates a file with the extension ".graph" in the output folder. This file just contains plain text:
Function Function Call Call Count Percent of Total Max Displacement (bits)
#% BEGIN STATIC GRAPH
myFunc out 0 in 3
out 0 100%
in 3 100%
myFunc2 1 33% 0 2514 0
.static00012204 1 33% 0 514 0
.static0001220b 1 33% 0 1300 0
#% END STATIC GRAPH
So the question is: which tool should be used next?
What we want is an image or an HTML document.
That is the graph. It is in text format.
Note that if you want a graphical representation of a function's call graph, from within the MULTI debugger you can right-click a function and select Browse Other -> Browse Static Calls.
If I use
/usr/bin/time -f"%e,%P,%M,%I,%O"
I get (for the last three placeholders) the maximum memory the process used, and how much input and output the process performed while it ran.
Obviously, it's easy to get %e or something like it using sys/time.h, but is there a way to get %M, %I and %O programmatically?
You could read and parse the files in the /proc filesystem. /proc/self refers to the process accessing the /proc filesystem.
/proc/self/statm contains information about memory usage, measured in pages. Sample output:
% cat /proc/self/statm
1115 82 63 12 0 79 0
Fields are size resident share text lib data dt; see the proc manual page for some additional details.
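For completeness, here is a minimal C sketch of reading those fields programmatically; this assumes the Linux field order shown above, and all values are in pages:

#include <stdio.h>
#include <unistd.h>

/* Read the seven statm fields for the current process.
   Values are in pages; multiply by the page size for bytes. */
int main(void)
{
    long size, resident, share, text, lib, data, dt;
    FILE *f = fopen("/proc/self/statm", "r");
    if (!f)
        return 1;
    if (fscanf(f, "%ld %ld %ld %ld %ld %ld %ld",
               &size, &resident, &share, &text, &lib, &data, &dt) == 7)
        printf("resident: %ld pages (%ld bytes)\n",
               resident, resident * sysconf(_SC_PAGESIZE));
    fclose(f);
    return 0;
}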
/proc/self/io contains the I/O for the current process. Sample output:
% cat /proc/self/io
rchar: 2012
wchar: 0
syscr: 6
syscw: 0
read_bytes: 0
write_bytes: 0
cancelled_write_bytes: 0
Unfortunately, io isn't documented in the proc manual page (at least on my Debian system). I had to check the iotop source code to see how it obtains the per-process I/O information.
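In the same spirit, a minimal sketch of pulling rchar and wchar out of /proc/self/io, assuming the key: value line format shown above:

#include <stdio.h>
#include <string.h>

/* Scan /proc/self/io; each line has the form "key: value". */
int main(void)
{
    char key[64];
    long long value;
    FILE *f = fopen("/proc/self/io", "r");
    if (!f)
        return 1;
    while (fscanf(f, "%63s %lld", key, &value) == 2)
        if (strcmp(key, "rchar:") == 0 || strcmp(key, "wchar:") == 0)
            printf("%s %lld\n", key, value);
    fclose(f);
    return 0;
}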
I am working on a program that outputs PDF documents. Given a sequence of UTF-8 encoded characters and the name of the font that shall be used to render it, I would like to show the appropriate glyphs that make up the actual content of the document. I would like to be able to display national characters such as č or ö. It would be great to support ligatures like ae or ffi.
The problem is, I do not know how the actual glyphs to be shown are specified (inside a content stream, for example).
If I, for example, want to display the string "Hello World", I need not worry about encoding; I simply write (Hello World)Tj. The PDF reader will then use the appropriate font to render this string.
But what if I wanted to show the string
It is difficult to read the PDF specification all day. Prostě dočista nemožné!
with the ligatures ffi, fi and ea and the Czech national symbols ě, č and é in a given font, how would I proceed?
I am trying to get through the PDF specification, but it is not easy.
How do I find out the "code of the glyph" that corresponds to a given character or ligature?
How is this code encoded within a PDF content stream?
Help is much appreciated.
Edit: I may have overestimated the problem. Counting the glyphs needed to display a "common European document", I cannot see how this number could exceed 256. If my assumptions are correct, I can remap the encoding of the font completely. This should be sufficient to cover all common symbols of the Latin alphabet, numbers, punctuation, and common symbols like ( and [, and I would still have plenty of room for national symbols, ligatures and other elements of high-quality typography. (I can implement a priority queue to select the most-used ligatures if the total number of glyphs should exceed 256.)
That being said, I do not think I need to use the CID-keyed fonts.
Still, I wonder how to map UTF-8 encoded characters onto the glyphs of an arbitrary font. I have the AFM of the font available. For the DejaVu font, for example, the character information goes like this:
C 63 ; WX 536 ; N question ; B 67 -15 488 743 ;
C 64 ; WX 1000 ; N at ; B 65 -174 930 705 ;
C 65 ; WX 722 ; N A ; B -6 0 732 730 ;
But after the 256th character is mapped, the codes are -1:
C 255 ; WX 564 ; N ydieresis ; B -3 -223 563 767 ;
C -1 ; WX 722 ; N Amacron ; B -6 0 732 899 ;
C -1 ; WX 596 ; N amacron ; B 49 -15 568 746 ;
For example, if I had the sequence 11100010 10000010 10101100 (Euro sign) in my input, how would I know what glyph name it corresponds to so that I can map it in the /Encoding dictionary?
Encoding varies based on the font type. Typically, there is a font resource that is defined as the current font and within that font dictionary is a reference to a base font and a means of describing the encoding (via the /Encoding key). If that key doesn't exist, the encoding will be "standard", but you can use other simple encodings such as /MacRoman and /WinAnsi for the value of the encoding, or you can specify a standard encoding and an encoding delta to show the differences.
Easy so far, as long as you're working with 8-bit characters. Many early apps would create a couple of different fonts: one with, say, Roman encoding, and another that maps Roman character codes to glyphs that are normally unencoded. In order to do that, your encoding delta would include references to the ligatures and other typically non-encoded symbols. This works great for Type 1 fonts, but is specifically contraindicated by the spec in the section on TrueType fonts:
A nonsymbolic font should specify MacRomanEncoding or WinAnsiEncoding as the value of its Encoding entry, with no Differences array
This is vastly different when you want to use, say, Unicode, in which case you would be using a CID font (a font based on character IDs). In that case there is a procedure referenced by the font which is used to map from a character encoding in your string to a character ID in your font (and vice versa). I would strongly recommend that you read and fully understand section 9.7 of the PDF specification on Composite Fonts, which describes everything you need in order to encode UTF-16BE into strings to get them to render properly in PDF. It is decidedly non-trivial, in that there are a lot of details that, if missed, will result in a blank rendered page in Acrobat.
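Before any of that, you still have to turn the UTF-8 bytes into Unicode code points. Here is a minimal decoding sketch (a toy under stated assumptions: BMP only, no validation of continuation bytes); the Euro-sign bytes from the question, E2 82 AC, come out as U+20AC, which is the value you would then encode as UTF-16BE on the PDF side:

#include <stdio.h>

/* Decode one UTF-8 sequence of up to 3 bytes (i.e. the BMP)
   into a Unicode code point. No error checking. */
static long decode_utf8(const unsigned char *s)
{
    if (s[0] < 0x80)
        return s[0];                                       /* 1 byte  */
    if ((s[0] & 0xE0) == 0xC0)
        return ((long)(s[0] & 0x1F) << 6) | (s[1] & 0x3F); /* 2 bytes */
    if ((s[0] & 0xF0) == 0xE0)
        return ((long)(s[0] & 0x0F) << 12)
             | ((long)(s[1] & 0x3F) << 6)
             | (s[2] & 0x3F);                              /* 3 bytes */
    return -1;  /* 4-byte sequences (outside the BMP) not handled */
}

int main(void)
{
    const unsigned char euro[] = { 0xE2, 0x82, 0xAC };
    printf("U+%04lX\n", decode_utf8(euro));  /* prints U+20AC */
    return 0;
}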
As a software engineer who professionally writes code that produces and consumes PDF, let me state that when I get tasked with having to put in special cases in my code to deal with non-spec compliant PDF, a little piece of me dies inside. Please, please, don't even think of releasing any documents you produce into the wild until they pass Preflight at the least. This is not the same as "Acrobat renders it so it must be OK." Let me give you an example - I've seen a number of files in the wild that include fonts that are missing the key elements of the FontDescriptor dictionary, including /Ascent, /Descent, /CapHeight, etc. These render in Acrobat, but are in violation of the spec since each of those is required. I know how Acrobat handles that - it comes with an enormous database of font metrics and looks up the value if it can't find it in the file (heck, it might even ignore the metrics in the file). I don't have that luxury, so I have to do a number of (potentially expensive/invalid) stop gap measures.
You might want to consider using a library to do this work for you, maybe iText, which has a decent enough licensing scheme for education because, I get it, you're a student. There are some C-based libraries too. Maybe you can figure out a way to make GhostScript do your bidding.
If you are unwilling or unable to follow my advice with regards to cleaving to the specification or to use a library which ostensibly does so, please do me the favor of at least filling out the /Creator and /Producer strings in the Document Information Dictionary referenced by the trailer (see sections 14.3.3 and section 7.5.5). That way, when I have to parse/consume/manipulate your documents, I will have a way to directly cast aspersions on your parentage.
Let's go top down and start with the page object - I'm using output from my own library and am stripping out what I think you don't need:
1 0 obj <<
/Type /Page
/Parent 18 0 R
/Resources <<
/Font <<
/U0 13 0 R
>>
/ProcSet [ /PDF /Text ]
>>
/MediaBox [ 0 0 612 792 ]
/Contents 19 0 R
/Dur -1
>>
endobj
U0 is a reference to a font that will be used for unicode text.
The content stream is intended to print the following text: Greek: Γειά σου κόσμος.
BT /U0 24 Tf 72 670 Td
(\000G\000r\000e\000e\000k\000:\000 \003\223\003\265\003\271\003\254\000 \003\303\003\277\003\305\000 \003\272\003\314\003\303\003\274\003\277\003\302)
Tj ET
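Those escapes are nothing more than the UTF-16BE bytes of each character written out in octal. A tiny sketch that reproduces the \003\223 pair for Γ (U+0393); emit_code is a hypothetical helper, not part of my library:

#include <stdio.h>

/* Emit one BMP code point as two octal-escaped bytes, big-endian,
   for use inside a PDF string under the Identity-H encoding. */
static void emit_code(FILE *out, unsigned int cp)
{
    fprintf(out, "\\%03o\\%03o", (cp >> 8) & 0xFF, cp & 0xFF);
}

int main(void)
{
    putchar('(');
    emit_code(stdout, 0x0393);  /* GREEK CAPITAL LETTER GAMMA */
    puts(")");                  /* output: (\003\223) */
    return 0;
}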
The font dictionary referenced looks like this:
13 0 obj <<
/BaseFont /DejaVuSansCondensed
/DescendantFonts [ 4 0 R ]
/ToUnicode 14 0 R
/Type /Font
/Subtype /Type0
/Encoding /Identity-H
>>
endobj
Its /ToUnicode entry points to a stream containing the following PostScript code:
/CIDInit /ProcSet findresource begin 12 dict begin begincmap /CIDSystemInfo << /Registry (Adobe) /Ordering (UCS) /Supplement 0 >> def /CMapName /Adobe-Identity-UCS def /CMapType 2 def 1 begincodespacerange <0000> <FFFF> endcodespacerange 1 beginbfrange <0000> <FFFF> <0000> endbfrange endcmap CMapName currentdict /CMap defineresource pop end end
This CMap format is defined by the CID font specification.
The DescendantFonts array points to this object:
4 0 obj <<
/Subtype /CIDFontType2
/Type /Font
/BaseFont /DejaVuSansCondensed
/CIDSystemInfo 7 0 R
/FontDescriptor 8 0 R
/DW 1000
/W 9 0 R
/CIDToGIDMap 10 0 R
>>
The CIDToGIDMap is a compressed stream with the actual map, and the CIDSystemInfo is <</Registry (Adobe) /Ordering (UCS) /Supplement 0>> (it's a reference because I share it among all Unicode fonts that I output). The FontDescriptor is straightforward boilerplate, and the W array is derived from the font metrics.
With all this detail, are you understanding why I don't say lightly, "walk away before you pollute my environment any further"?
I'm really beginning to question the nature of this assignment. Writing a simple PDF is one thing, but writing code that can handle full Unicode in any arbitrary OpenType/TrueType font requires you to understand the CID spec and the TrueType spec (hint: I have a full TrueType parser that can extract all the metrics for any glyph in a font so that I can output the /W array).
If, however, you are required to output only Type 1 fonts, well my friend, your life just got a whole lot easier, because you would take your entire UTF-8 stream, read it as Unicode, and for every unique character that comes in, build a map from the Unicode character to a glyph name and an internal character number by using this table. The internal character number is essentially the unique index of the character that came in, mod 256. So, for example, if you have fewer than 257 unique characters on the page, you will have exactly one font that is encoded to map to the characters in the order that they arrived. If you had "abcba" for input, the output string in the PDF would be (\000\001\002\001\000) and would map to a font with an encoding dictionary whose Differences array would be [0/a/b/c]. If you have n unique characters where n > 256, you're going to have (n / 256) + 1 fonts, each with its own encoding.
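A sketch of that bookkeeping, assuming 8-bit input for brevity; a real version would first decode UTF-8 to code points and also record a glyph name per code for the Differences array:

#include <stdio.h>
#include <string.h>

/* Assign each unique input character the next free code, in order
   of first appearance, and print the remapped PDF string. */
int main(void)
{
    const char *input = "abcba";
    int code_of[256];
    int next_code = 0;
    const char *p;

    memset(code_of, -1, sizeof code_of);
    putchar('(');
    for (p = input; *p; p++) {
        unsigned char c = (unsigned char)*p;
        if (code_of[c] < 0)
            code_of[c] = next_code++;  /* first appearance: new code */
        printf("\\%03o", code_of[c]);  /* octal escape in PDF string */
    }
    puts(")");  /* prints (\000\001\002\001\000) for "abcba" */
    return 0;
}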
If your teacher/professor wants anything but Type 1 fonts in a short period of time, s/he has unrealistic expectations for the students and/or low expectations for the quality of output. You should ask whether you are required to handle CID fonts, and if you are, then your professor is at the very least a sadist. It took me, a seasoned professional, about 4 days to write a TrueType parser for extracting widths. I had the advantage of (1) using a managed language (C#), which cut down on the concerns that will be biting your ass in C, and was also able to use reflection to automate parsing, and (2) when I don't have interruptions, I write solid code about 10-20 times faster than a typical student, so my 32 hours would translate into 320 student hours, more or less (then again, my code has different constraints than yours: it has to consume any crap font it gets gracefully), so let's call it 200 or less if you're allowed to steal something like stb. That's just for getting one particular element in the font descriptor.
I have to start by saying that I am very much a programming noob. I do not understand all the compiler options or nuances of the IDE, not by a long shot. But I am trying to teach myself more about native programming languages. (I'm decent with C#, but that is much easier than C, as I am discovering.)
Today, I wrote this small program in C. It is a console/command-line program. I used Visual Studio 2012, and my development machine alternates between Windows 7 and 8, 64-bit. To start, I created a new VC++ project and chose a Blank Project. Then I created a new app.c file. I also created a *.rc file to give the executable some extra properties like "File Version" and "Company Name" when you browse the file properties in Windows Explorer. Then I went to the properties of the project, chose Configuration Properties -> C/C++ -> Code Generation, and changed Runtime Library to "Multi-threaded (/MT)" so that I wouldn't have to distribute the msvcr110.dll file along with my executable.
In the app.c file, I placed the following code:
#include <stdio.h>
#include <string.h>
#include <Windows.h>
#include <WtsApi32.h>
#pragma comment(lib, "WtsApi32.lib")
int main(int argc, char *argv[])
{
char *helpMsg = "blah";
char *hostName, *connState = "";
char *addrFamily = "";
HANDLE hHost = NULL;
...stuff and so forth and so on...
}
Then I built/compiled the program, and the executable works just fine on Windows 7, 8, Server 2008R2, and Server 2012, all 64-bit. But when I try to run the program on Server 2003 (and I am guessing WinXP, etc., as well), I am greeted with the Windows dialog box:
"Foo.exe is not a valid Win32 application."
So my question is: is there something obvious/simple I am missing that will allow this executable to also work on earlier XP/2003/32-bit platforms? I do not believe that I am using any 64-bit-exclusive features in my program. But I figured that since I chose "Blank Project" instead of "Win32 Console Application", I may be missing some setting.
Edit: Here is the dumpbin.exe /headers output when run against my exe:
Microsoft (R) COFF/PE Dumper Version 11.00.50727.1
Copyright (C) Microsoft Corporation. All rights reserved.
Dump of file C:\users\me\Release\foo.exe
PE signature found
File Type: EXECUTABLE IMAGE
FILE HEADER VALUES
14C machine (x86)
5 number of sections
50F604BC time date stamp Tue Jan 15 19:39:08 2013
0 file pointer to symbol table
0 number of symbols
E0 size of optional header
102 characteristics
Executable
32 bit word machine
OPTIONAL HEADER VALUES
10B magic # (PE32)
11.00 linker version
7800 size of code
A200 size of initialized data
0 size of uninitialized data
16A7 entry point (004016A7) _mainCRTStartup
1000 base of code
9000 base of data
400000 image base (00400000 to 00414FFF)
1000 section alignment
200 file alignment
6.00 operating system version
0.00 image version
6.00 subsystem version
0 Win32 version
15000 size of image
400 size of headers
0 checksum
3 subsystem (Windows CUI)
8140 DLL characteristics
Dynamic base
NX compatible
Terminal Server Aware
100000 size of stack reserve
1000 size of stack commit
100000 size of heap reserve
1000 size of heap commit
0 loader flags
10 number of directories
0 [ 0] RVA [size] of Export Directory
D374 [ 3C] RVA [size] of Import Directory
11000 [ 538] RVA [size] of Resource Directory
0 [ 0] RVA [size] of Exception Directory
0 [ 0] RVA [size] of Certificates Directory
12000 [ C04] RVA [size] of Base Relocation Directory
9160 [ 38] RVA [size] of Debug Directory
0 [ 0] RVA [size] of Architecture Directory
0 [ 0] RVA [size] of Global Pointer Directory
0 [ 0] RVA [size] of Thread Storage Directory
CF98 [ 40] RVA [size] of Load Configuration Directory
0 [ 0] RVA [size] of Bound Import Directory
9000 [ 118] RVA [size] of Import Address Table Directory
0 [ 0] RVA [size] of Delay Import Directory
0 [ 0] RVA [size] of COM Descriptor Directory
0 [ 0] RVA [size] of Reserved Directory
SECTION HEADER #1
.text name
7670 virtual size
1000 virtual address (00401000 to 0040866F)
7800 size of raw data
400 file pointer to raw data (00000400 to 00007BFF)
0 file pointer to relocation table
0 file pointer to line numbers
0 number of relocations
0 number of line numbers
60000020 flags
Code
Execute Read
SECTION HEADER #2
.rdata name
49E2 virtual size
9000 virtual address (00409000 to 0040D9E1)
4A00 size of raw data
7C00 file pointer to raw data (00007C00 to 0000C5FF)
0 file pointer to relocation table
0 file pointer to line numbers
0 number of relocations
0 number of line numbers
40000040 flags
Initialized Data
Read Only
Debug Directories
Time Type Size RVA Pointer
-------- ------ -------- -------- --------
50F604BC cv 61 0000CFE0 BBE0 Format: RSDS, {582D0FF2-59C1-4633-AF2A-E4A4AD6BFA2C}, 1, C:\Users\me\Release\users.pdb
50F604BC feat 10 0000D044 BC44 Counts: Pre-VC++ 11.00=0, C/C++=116, /GS=116, /sdl=0
SECTION HEADER #3
.data name
2C04 virtual size
E000 virtual address (0040E000 to 00410C03)
E00 size of raw data
C600 file pointer to raw data (0000C600 to 0000D3FF)
0 file pointer to relocation table
0 file pointer to line numbers
0 number of relocations
0 number of line numbers
C0000040 flags
Initialized Data
Read Write
SECTION HEADER #4
.rsrc name
538 virtual size
11000 virtual address (00411000 to 00411537)
600 size of raw data
D400 file pointer to raw data (0000D400 to 0000D9FF)
0 file pointer to relocation table
0 file pointer to line numbers
0 number of relocations
0 number of line numbers
40000040 flags
Initialized Data
Read Only
SECTION HEADER #5
.reloc name
235C virtual size
12000 virtual address (00412000 to 0041435B)
2400 size of raw data
DA00 file pointer to raw data (0000DA00 to 0000FDFF)
0 file pointer to relocation table
0 file pointer to line numbers
0 number of relocations
0 number of line numbers
42000040 flags
Initialized Data
Discardable
Read Only
Summary
3000 .data
5000 .rdata
3000 .reloc
1000 .rsrc
8000 .text
I have also tried going to Project Properties -> Linker -> System: Minimum Required Version and changing that to 5.00 and 1.00 or whatever, but it has no effect. dumpbin.exe still reports the OS version as 6.00. I have even used editbin.exe /version 5.00 on the exe and no errors were reported... and yet dumpbin.exe still reports 6.00 for the OS version.
VS2012 originally shipped without supporting XP/2003. The updated CRT and runtime support libraries use too many Windows API functions that are not available on those operating systems. This created quite a stir among its customers, to put it mildly, and they re-engineered the libraries to dynamically bind to these functions and limp along if they are missing. This was made available in Update 1; you'll need to use Project + Properties, General, Platform Toolset = v110_xp to build programs that use those libraries.
Note how it changes a linker setting, the important one, Linker > System > Minimum Required Version = "5.01". Which ensures that the executable file is marked to be compatible with the XP sub-system version. You'll also build against SDK version 7.1, the last one that is still compatible with XP.
When you use the default toolset (v110), you target subsystem version 6.00 and SDK version 8. Version 6.00 was the last major kernel revision; it started with Vista.
A brief overview of the new API functions being used, to give you a (very rough) idea of what is missing in the XP version:
FlsAlloc, FlsFree, FlsGetValue, FlsSetValue : safe thread-local storage
InitializeCriticalSectionEx, CreateSemaphoreEx : safety
SetThreadStackGuarantee : stability
CreateThreadPoolTimer, SetThreadPoolTimer, WaitForThreadPoolTimerCallbacks, CloseThreadPoolTimer : cheaper timers
CreateThreadPoolWait, SetThreadPoolWait, CloseThreadPoolWait : cheaper waits?
FlushProcessWriteBuffers, GetCurrentProcessorNumber, GetLogicalProcessorInformation : threading
FreeLibraryWhenCallbackReturns : stability?
CreateSymbolicLink : functionality
InitOnceExecuteOnce : unknown
SetDefaultDllDirectories : unknown
EnumLocalesEx, CompareStringEx, GetDateFormatEx, GetLocalInfoEx, GetTimeFormatEx, GetUserDefaultLocaleName, IsValidLocaleName, LCMapStringEx : better locale support
I figured it out myself. (But thank you Hans for steering me in the right direction.) For some reason, even with Update 1 and even after setting my toolset to v110_xp, and setting the minimum required version to 5.01 in the Linker options, the resulting dumpbin app.exe /headers still reports a minimum operating system version of 6.0.
So I simply ran
editbin.exe app.exe /SUBSYSTEM:CONSOLE,5.01 /OSVERSION:5.1
And the executable now runs just fine on older operating systems. I'm thinking there still might be a little bit of a bug somewhere in Visual Studio.
The MSVC Team Blog says that when using MSBuild or DEVENV from the command-line with the v110_xp platform toolset, no other changes are necessary. This information is incorrect/incomplete. The /SUBSYSTEM linker argument and associated "Minimum Required Version" must also be set appropriately.
The MSDN documentation for /ENTRY states that, if the /SUBSYSTEM argument is not specified, the subsystem and entry point are determined automatically. My hunch is that when this happens, the subsystem's "Minimum Required Version" argument is also automatically overridden.
The v110_xp toolset automatically specifies the SUBSYSTEM's MRV ("5.1" (WindowsXP)) but not the SUBSYSTEM. As such, the MRV will be overridden, for example, by the linker to "6.0". Running the application will then cause WindowsXP to show the error message stating that the application "is not a valid Win32 application."
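In command-line terms, the complete workaround amounts to something like this; app.vcxproj and app.exe are hypothetical names, and the editbin invocation is the one from the answer above:

msbuild app.vcxproj /p:Configuration=Release /p:PlatformToolset=v110_xp
editbin.exe app.exe /SUBSYSTEM:CONSOLE,5.01 /OSVERSION:5.1
dumpbin.exe /headers app.exe

The dumpbin step is just to verify that the subsystem version in the optional header now reads 5.01.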
I was just going through SO and found the question Determining CPU utilization.
The question is interesting, and the answer is even more interesting.
So I thought I'd do some checks on my Solaris SPARC Unix system.
I went to /proc as the root user and found some directories with numbers as their names.
I think these numbers are the process IDs. Surprisingly, I did not find /proc/stat (I don't know why).
I took one process ID (one directory) and checked what is present inside it. Below is the output:
root#tiger> cd 11770
root#tiger> pwd
/proc/11770
root#tiger> ls
as contracts ctl fd lstatus lwp object path psinfo root status watch
auxv cred cwd lpsinfo lusage map pagedata priv rmap sigact usage xmap
I checked what those files are:
root#tigris> file *
as: empty file
auxv: data
contracts: directory
cred: data
ctl: cannot read: Invalid argument
cwd: directory
fd: directory
lpsinfo: data
lstatus: data
lusage: data
lwp: directory
map: TrueType font file version 1.0 (TTF)
object: directory
pagedata: cannot read: Arg list too long
path: directory
priv: data
psinfo: data
rmap: TrueType font file version 1.0 (TTF)
root: directory
sigact: ascii text
status: data
usage: data
watch: empty file
xmap: TrueType font file version 1.0 (TTF)
Given this, I am not sure how I can determine the CPU utilization.
For example, what is the idle time of my process?
Can anyone point me in the right direction, preferably with an example?
As no one else is taking the bait, I'll add some comments/answers.
First off, did you check out the info available for Solaris system tuning? That covers older Solaris releases (2.6, 7 and 8); presumably a little searching at developers.sun.com will find something newer.
You wrote:
I went to /proc as the root user and found some directories with numbers as their names. I think these numbers are the process IDs. Surprisingly, I did not find /proc/stat (I don't know why).
Many non-Linux OS's have their own special conventions on how processes are managed.
For Solaris, the /proc directory is not a directory of disk-based files, but information about all of the active system processes arranged like a directory hierarchy. Cool, right?!
I don't know the exact meaning of stat (status? statistics? something else?), but that is just the naming convention used in a different OS's directory structure for holding process information.
As you have discovered, below /proc/ are a bunch of numbered entries; these are the active process IDs. When you cd into any one of those, you see the system information available for that process.
I checked what those files are: ...
I don't have access to Solaris servers any more, so we'll have to guess a little. I recommend 'drilling down' into any file or directory whose name hints at anything related.
Did you try cat psinfo? What did that produce?
If the Solaris tuning page didn't help, is your apropos working? Do apropos proc and see what man pages are mentioned; drill down on those. Else try man proc, and look near the bottom of the entry for the 'see also' section AND for the Examples section.
(Un)?fortunately, most man pages are not tutorials, so reading through these may only give you an idea of how much more you need to study.
You know about the built-in commands that offer some performance-monitoring capabilities, e.g. ps, top, etc.?
And the excellent AIX-based nmon has been (and is still being?) ported to Solaris too; see http://sourceforge.net/projects/sarmon/.
There are also expensive monitoring/measuring/utilization tools that network managers like to have; as mere developers, we never got to use them. Look at the paid ads when you google for 'solaris performance monitoring'.
Finally, keep in mind this excellent observation from the developer of the AIX nmon system monitor, included in the nmon FAQ:
If you keep using shorter and shorter periods you will eventually see that the CPUs are either 100% busy or 100% idle; all the other numbers are just a feature of humans not thinking fast enough and having to average out the CPU use over longer periods.
I hope this helps.
There is no simple and accurate way to get the CPU utilization from Solaris /proc hierarchy.
Unlike Linux, which uses /proc to store various system information and statistics, Solaris only presents process-related data under /proc.
There is also another difference: Linux usually presents preprocessed, readable data (text), while Solaris always presents the actual kernel structures or raw binary data.
All of this is fully documented in the 46-page Solaris proc manual page (man -s 4 proc).
While it would be possible to get the CPU utilization by summing the usage per process from this hierarchy, i.e. by reading each /proc/<pid>/usage file, the usual way is through the Solaris kstat (kernel statistics) interface. Moreover, the former method would be inaccurate because it misses CPU usage that is not accounted to any process but directly to the kernel.
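To illustrate the per-process route just mentioned, here is a minimal Solaris sketch; it assumes the binary prusage_t layout from <procfs.h> and reads the calling process through /proc/self:

#include <fcntl.h>
#include <procfs.h>
#include <stdio.h>
#include <unistd.h>

/* Read the prusage_t structure from /proc/self/usage and print
   the user and system CPU time consumed by this process. */
int main(void)
{
    prusage_t u;
    int fd = open("/proc/self/usage", O_RDONLY);
    if (fd < 0 || read(fd, &u, sizeof u) != sizeof u)
        return 1;
    printf("user time:   %ld.%09ld s\n",
           (long)u.pr_utime.tv_sec, u.pr_utime.tv_nsec);
    printf("system time: %ld.%09ld s\n",
           (long)u.pr_stime.tv_sec, u.pr_stime.tv_nsec);
    close(fd);
    return 0;
}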
kstat (man -a kstat) is what the usual commands that report what you are looking for, such as vmstat, iostat, prstat, sar, top and the like, are all using under the hood.
For example, CPU usage is displayed in the last three columns of vmstat output (us, sy and id, for time spent in userland, in the kernel, and idling).
$ vmstat 10 8
kthr memory page disk faults cpu
r b w swap free re mf pi po fr de sr cd s0 -- -- in sy cs us sy id
0 0 0 1346956 359168 34 133 96 0 0 0 53 11 0 0 0 264 842 380 9 7 84
0 0 0 1295084 275292 0 4 4 0 0 0 0 0 0 0 0 248 288 200 2 3 95
0 0 0 1295080 275276 0 0 0 0 0 0 0 3 0 0 0 252 271 189 2 3 95
0 0 0 1295076 275272 0 14 0 0 0 0 0 0 0 0 0 251 282 189 2 3 95
0 0 0 1293840 262364 1137 1369 4727 0 0 0 0 131 0 0 0 605 1123 620 15 19 66
0 0 0 1281588 224588 127 561 750 1 1 0 0 89 0 0 0 438 1840 484 51 15 34
0 0 0 1275392 217824 31 115 233 2 2 0 0 31 0 0 0 377 821 465 20 8 72
0 0 0 1291532 257892 0 0 0 0 0 0 0 8 0 0 0 270 282 219 2 3 95
If for some reason you don't want to use vmstat, you can get the kstat counters directly with the kstat command, but that would be cumbersome and less portable.
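For reference, here is a minimal libkstat sketch (compile on Solaris and link with -lkstat) that reads the raw per-CPU tick counters vmstat derives its percentages from; to get a utilization figure, you would take two samples and divide the deltas by the interval:

#include <kstat.h>
#include <stdio.h>
#include <sys/sysinfo.h>

/* Read the cpu_stat kstat for CPU instance 0 and print the raw
   idle/user/kernel/wait tick counters. */
int main(void)
{
    kstat_ctl_t *kc = kstat_open();
    kstat_t *ksp;
    cpu_stat_t cs;

    if (kc == NULL)
        return 1;
    ksp = kstat_lookup(kc, "cpu_stat", 0, NULL);
    if (ksp == NULL || kstat_read(kc, ksp, &cs) == -1)
        return 1;
    printf("idle=%u user=%u kernel=%u wait=%u\n",
           cs.cpu_sysinfo.cpu[CPU_IDLE],
           cs.cpu_sysinfo.cpu[CPU_USER],
           cs.cpu_sysinfo.cpu[CPU_KERNEL],
           cs.cpu_sysinfo.cpu[CPU_WAIT]);
    kstat_close(kc);
    return 0;
}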
I have a large .ps file. There are some variables defined, but I am not able to figure out the length of the variables.
I am picking out part of the code; it is laid out like a matrix with four columns and multiple rows.
.../$v.HDLE_UNIT_TOTAL ex 31.752 103.752 220.4 2 10.0 0.0 0 vsE // 31.752 must be the x axis?
/$v.PKGE_QTY_TOTAL ex 69.48 213.48 220.4 2 10.0 0.0 0 vsE.... // 213.48 must be the y axis?
// what defines the length of that variable?
It is completely impossible to say what this PostScript snippet does without access to the complete PostScript program/file.
At the very least one needs to know what vsE means. It must be defined elsewhere in the file (closer to the front) in a statement similar to:
/vsE {some-PS-code} def