I was just going through SO and found the question Determining CPU utilization.
The question is interesting, and the answer is even more interesting.
So I thought of doing some checks on my Solaris SPARC Unix system.
I went to /proc as the root user and found some directories with numbers as their names.
I think these numbers are the process IDs. Surprisingly, I did not find /proc/stat (I don't know why).
I took one process ID (one directory) and checked what is present inside it. Below is the output:
root#tiger> cd 11770
root#tiger> pwd
/proc/11770
root#tiger> ls
as contracts ctl fd lstatus lwp object path psinfo root status watch
auxv cred cwd lpsinfo lusage map pagedata priv rmap sigact usage xmap
I checked what those files are:
root#tigris> file *
as: empty file
auxv: data
contracts: directory
cred: data
ctl: cannot read: Invalid argument
cwd: directory
fd: directory
lpsinfo: data
lstatus: data
lusage: data
lwp: directory
map: TrueType font file version 1.0 (TTF)
object: directory
pagedata: cannot read: Arg list too long
path: directory
priv: data
psinfo: data
rmap: TrueType font file version 1.0 (TTF)
root: directory
sigact: ascii text
status: data
usage: data
watch: empty file
xmap: TrueType font file version 1.0 (TTF)
I am not sure, given this, how I can determine the CPU utilization.
For example: what is the idle time of my process?
Can anyone point me in the right direction?
Preferably with an example!
As no one else is taking the bait, I'll add some comments/answers.
First off, did you check out the info available for Solaris system tuning? That covers older Solaris releases (2.6, 7 and 8); presumably a little searching at developers.sun.com will find something newer.
You wrote:
I went to /proc as the root user and found some directories with numbers as their names. I think these numbers are the process IDs. Surprisingly, I did not find /proc/stat (I don't know why).
Many non-Linux OSes have their own conventions for how processes are managed.
For Solaris, the /proc directory is not a directory of disk-based files, but information about all of the active system processes arranged like a directory hierarchy. Cool, right?!
I don't know the exact meaning of stat (status? statistics? something else?), but that is just the convention used in a different OS's directory structure that holds the process information.
As you have discovered, below /proc are a bunch of numbered entries; these are the active process IDs. When you cd into any one of those, you're seeing the system information available for that process.
I checked what those files are: ...
I don't have access to Solaris servers any more, so we'll have to guess a little. I recommend 'drilling down' into any file or directory whose name hints at anything related.
Did you try cat psinfo? What did that produce?
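One caveat: psinfo is a binary psinfo_t structure (see proc(4)), so cat will show mostly unprintable bytes; a short C program is the usual way to inspect it. Here is a minimal, untested sketch that prints a process's name and %CPU (per proc(4), pr_pctcpu is a binary fraction where 0x8000 means 100%):

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <procfs.h>

/* Read /proc/<pid>/psinfo into a psinfo_t and print the process name
 * and %CPU.  Per proc(4), pr_pctcpu is a binary fraction: 0x8000 == 100%. */
int main(int argc, char **argv)
{
    char path[64];
    psinfo_t ps;
    int fd;

    snprintf(path, sizeof path, "/proc/%s/psinfo",
             argc > 1 ? argv[1] : "self");
    fd = open(path, O_RDONLY);
    if (fd == -1 || read(fd, &ps, sizeof ps) != (ssize_t)sizeof ps) {
        perror(path);
        return 1;
    }
    close(fd);
    printf("%s (pid %ld): %.1f%% cpu\n",
           ps.pr_fname, (long)ps.pr_pid,
           ps.pr_pctcpu * 100.0 / 0x8000);
    return 0;
}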
If the Solaris tuning page didn't help, is your apropos working? Do apropos proc and see what man pages are mentioned. Drill down on those. Or try man proc, and look near the bottom of the entry for the 'See Also' section AND the Examples section.
(Un)fortunately, most man pages are not tutorials, so reading through these may only give you an idea of how much more you need to study.
You know about the built-in commands that offer some performance-monitoring capabilities, i.e. ps, top, etc.?
And the excellent AIX-based nmon has been (or is still being?) ported to Solaris too; see http://sourceforge.net/projects/sarmon/.
There are also expensive monitoring/measuring/utilization tools that network managers like to have; as mere developers, we never got to use them. Look at the paid ads when you google 'solaris performance monitoring'.
Finally, keep in mind this excellent observation from the developer of the AIX nmon system monitor, included in the FAQ for nmon:
If you keep using shorter and shorter periods, you will eventually see that the CPUs are either 100% busy or 100% idle; all the other numbers are just a feature of humans not thinking fast enough and having to average out the CPU use over longer periods.
I hope this helps.
There is no simple and accurate way to get the CPU utilization from the Solaris /proc hierarchy.
Unlike Linux, which uses /proc to store various system information and statistics, Solaris only presents process-related data under /proc.
There is also another difference: Linux usually presents preprocessed, readable data (text), while Solaris always presents the actual kernel structures (raw binary data).
All of this is fully documented in the 46-page Solaris proc manual page (man -s 4 proc).
While it would be possible to get the CPU utilization by summing the usage per process from this hierarchy, i.e. by reading each /proc/<pid>/xxx file, the usual way is through the Solaris kstat (kernel statistics) interface. Moreover, the per-process method would be inaccurate, since it misses CPU usage that is accounted not to processes but directly to the kernel.
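For illustration, the per-process CPU times live in /proc/<pid>/usage as a binary prusage_t structure (see proc(4)). A minimal, untested sketch that prints one process's accumulated user and system time; summing these over all pids approximates the per-process total:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <procfs.h>

/* Print the user and system CPU time one process has consumed so far,
 * read from the binary prusage_t in /proc/<pid>/usage (see proc(4)). */
int main(int argc, char **argv)
{
    char path[64];
    prusage_t u;
    int fd;

    if (argc < 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return 1;
    }
    snprintf(path, sizeof path, "/proc/%s/usage", argv[1]);
    fd = open(path, O_RDONLY);
    if (fd == -1 || read(fd, &u, sizeof u) != (ssize_t)sizeof u) {
        perror(path);
        return 1;
    }
    close(fd);
    printf("user %lld.%03lds  system %lld.%03lds\n",
           (long long)u.pr_utime.tv_sec, u.pr_utime.tv_nsec / 1000000,
           (long long)u.pr_stime.tv_sec, u.pr_stime.tv_nsec / 1000000);
    return 0;
}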
kstat (man -a kstat) is what all the usual commands that report what you are looking for, like vmstat, iostat, prstat, sar, top and the like, use under the hood.
For example, CPU usage is displayed in the last three columns of vmstat output (us, sy and id, for time spent in userland, in the kernel, and idling).
$ vmstat 10 8
kthr memory page disk faults cpu
r b w swap free re mf pi po fr de sr cd s0 -- -- in sy cs us sy id
0 0 0 1346956 359168 34 133 96 0 0 0 53 11 0 0 0 264 842 380 9 7 84
0 0 0 1295084 275292 0 4 4 0 0 0 0 0 0 0 0 248 288 200 2 3 95
0 0 0 1295080 275276 0 0 0 0 0 0 0 3 0 0 0 252 271 189 2 3 95
0 0 0 1295076 275272 0 14 0 0 0 0 0 0 0 0 0 251 282 189 2 3 95
0 0 0 1293840 262364 1137 1369 4727 0 0 0 0 131 0 0 0 605 1123 620 15 19 66
0 0 0 1281588 224588 127 561 750 1 1 0 0 89 0 0 0 438 1840 484 51 15 34
0 0 0 1275392 217824 31 115 233 2 2 0 0 31 0 0 0 377 821 465 20 8 72
0 0 0 1291532 257892 0 0 0 0 0 0 0 8 0 0 0 270 282 219 2 3 95
If for some reason you don't want to use vmstat, you can get the kstat counters directly by using the kstat command, but that would be cumbersome and less portable.
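If you do want the programmatic route, the libkstat C API is the cleaner option. A minimal, untested sketch (compile with -lkstat); note the counters are cumulative since boot, so utilization over an interval means sampling twice and subtracting:

#include <stdio.h>
#include <string.h>
#include <kstat.h>
#include <sys/sysinfo.h>

/* Sum the cumulative user/kernel/wait/idle tick counters of every CPU
 * via libkstat.  Sample twice and subtract to get utilization over an
 * interval, as vmstat does. */
int main(void)
{
    kstat_ctl_t *kc = kstat_open();
    kstat_t *ksp;
    cpu_stat_t *cs;
    unsigned long long ticks[CPU_STATES] = { 0 };
    int i;

    if (kc == NULL) {
        perror("kstat_open");
        return 1;
    }
    for (ksp = kc->kc_chain; ksp != NULL; ksp = ksp->ks_next) {
        if (strcmp(ksp->ks_module, "cpu_stat") != 0)
            continue;
        if (kstat_read(kc, ksp, NULL) == -1)
            continue;
        /* ks_data of a cpu_stat kstat is a raw cpu_stat_t structure */
        cs = (cpu_stat_t *)ksp->ks_data;
        for (i = 0; i < CPU_STATES; i++)
            ticks[i] += cs->cpu_sysinfo.cpu[i];
    }
    printf("idle=%llu user=%llu kernel=%llu wait=%llu (ticks since boot)\n",
           ticks[CPU_IDLE], ticks[CPU_USER],
           ticks[CPU_KERNEL], ticks[CPU_WAIT]);
    kstat_close(kc);
    return 0;
}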
Related
I'm trying to understand where U-Boot stores the environment variables in eMMC. I have the following set in the U-Boot config file:
CONFIG_ENV_IS_NOWHERE=y
CONFIG_ENV_IS_IN_MMC=y
CONFIG_SYS_MMC_ENV_DEV=2
CONFIG_SYS_MMC_ENV_PART=0
CONFIG_ENV_SIZE=0x4000
CONFIG_ENV_OFFSET=0x400000
CONFIG_ENV_SECT_SIZE=0x10000
U-Boot is able to save the env variables in eMMC, and I can read them from Linux. Variables set from Linux are also readable in U-Boot. What I fail to understand is which partition has the environment variables.
Output of lsblk from Linux:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
mtdblock0 31:0 0 16M 0 disk
mmcblk2 179:0 0 7.3G 0 disk
|-mmcblk2p1 179:1 0 83.2M 0 part /run/media/mmcblk2p1
|-mmcblk2p2 179:2 0 1.7G 0 part /
|-mmcblk2p3 179:3 0 83.2M 0 part /run/media/mmcblk2p3
|-mmcblk2p4 179:4 0 1K 0 part
|-mmcblk2p5 179:5 0 1.7G 0 part /run/media/mmcblk2p5
`-mmcblk2p6 179:6 0 1G 0 part /run/media/mmcblk2p6
mmcblk2boot0 179:32 0 4M 1 disk
mmcblk2boot1 179:64 0 4M 1 disk
mmcblk2boot0, which is 4 MB in size, is where U-Boot is located. Considering the environment is saved at an offset of 0x400000 in the mmcblk2 device, it should be located just after mmcblk2boot0, which points to the partition mmcblk2boot1. But if I dump the strings (strings /dev/mmcblk2boot1), I get nothing.
I have provided a file /etc/fw_config, which is used by fw_printenv and fw_setenv; this file contains "/dev/mmcblk2 0x400000 0x10000". All the settings point to the fact that the U-Boot environment is located at an offset of 0x400000 in the mmcblk2 device. Any pointers on which partition listed in the output of lsblk holds the U-Boot environment variables?
Thanks,
Vinay
The confusing thing here is that CONFIG_SYS_MMC_ENV_PART does not refer to software partitions, such as the /dev/mmcblk2p1 you see under Linux, but rather to eMMC hardware partitions: partition 0 is the user area (/dev/mmcblk2 itself), partition 1 is /dev/mmcblk2boot0, and so on. With CONFIG_SYS_MMC_ENV_PART=0, U-Boot writes the environment at offset 0x400000 of the raw /dev/mmcblk2 device and ignores the software partition table entirely; whether that offset happens to fall inside one of the mmcblk2pN partitions depends only on how the partition table was laid out.
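You can confirm this from Linux by reading the raw device at that offset yourself. A rough, untested sketch using the settings above (ENV_OFFSET 0x400000 and ENV_SIZE 0x4000 from your config); it assumes a non-redundant environment, i.e. a 4-byte CRC32 followed by NUL-separated name=value strings:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Dump U-Boot environment variables from the eMMC user area.
 * Assumes CONFIG_ENV_OFFSET=0x400000, CONFIG_ENV_SIZE=0x4000 and a
 * non-redundant environment: 4-byte CRC32, then NUL-separated strings. */
#define ENV_DEV    "/dev/mmcblk2"
#define ENV_OFFSET 0x400000L
#define ENV_SIZE   0x4000

int main(void)
{
    char *buf = malloc(ENV_SIZE);
    FILE *f = fopen(ENV_DEV, "rb");
    char *p;

    if (f == NULL || buf == NULL) {
        perror(ENV_DEV);
        return 1;
    }
    if (fseek(f, ENV_OFFSET, SEEK_SET) != 0 ||
        fread(buf, 1, ENV_SIZE, f) != ENV_SIZE) {
        perror("read");
        return 1;
    }
    fclose(f);
    /* skip the CRC word; an empty string marks the end of the env */
    for (p = buf + 4; p < buf + ENV_SIZE && *p != '\0'; p += strlen(p) + 1)
        printf("%s\n", p);
    free(buf);
    return 0;
}

Run as root, it should print the same variables fw_printenv shows, straight from the user area of the eMMC.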
In our project we have to use the Green Hills compiler suite MULTI. Everything is configured and running. Reading the compiler manual, I found an option for the linker that will generate a call graph.
I added the option to the linker (elxr) in the makefile with
LINK_OPT += -callgraph
which generates a file with the extension ".graph" in the output folder. This file just contains plain text:
Function Function Call Call Count Percent of Total Max Displacement (bits)
#% BEGIN STATIC GRAPH
myFunc out 0 in 3
out 0 100%
in 3 100%
myFunc2 1 33% 0 2514 0
.static00012204 1 33% 0 514 0
.static0001220b 1 33% 0 1300 0
#% END STATIC GRAPH
So the question is: what tool has to be used next?
What we want is an image or an HTML document.
That is the graph. It is in text format.
Note that if you want a graphical representation of a function's call graph, from within the MULTI debugger you can right-click a function and select Browse Other -> Browse Static Calls.
I have a server running HP-UX 11 and I'm trying to get I/O statistics for file systems (not disks).
For example, I have 50 disks attached to the server; when I type iostat I see (output below for 3 of the disks):
disk9 508 31.4 1.0
disk10 53 1.5 1.0
disk11 0 0.0 1.0
And I have file systems (df output):
/c101 (/dev/VGAPPLI/c101_lv): 66426400 blocks 1045252 i-nodes
/c102 (/dev/VGAPPLI/c102_lv): 360190864 blocks 5672045 i-nodes
/c103 (/dev/VGAPPLI/c103_lv): 150639024 blocks 2367835 i-nodes
/c104 (/dev/VGAPPLI/c104d_lv): 75852825 blocks 11944597 i-nodes
Is it possible to get I/O statistics specifically for these file systems?
Thanks.
If I use
/usr/bin/time -f"%e,%P,%M,%I,%O"
I get (for the last three placeholders) the memory the process used, and whether there was any input and output during its run.
Obviously, it's easy to get %e or something like it using sys/time.h, but is there a way to get %M, %I and %O programmatically?
You could read and parse the files in the /proc filesystem. /proc/self refers to the process accessing the /proc filesystem.
/proc/self/statm contains information about memory usage, measured in pages. Sample output:
% cat /proc/self/statm
1115 82 63 12 0 79 0
Fields are size resident share text lib data dt; see the proc manual page for some additional details.
/proc/self/io contains the I/O for the current process. Sample output:
% cat /proc/self/io
rchar: 2012
wchar: 0
syscr: 6
syscw: 0
read_bytes: 0
write_bytes: 0
cancelled_write_bytes: 0
Unfortunately, io isn't documented in the proc manual page (at least on my Debian system). I had to check the iotop source code to see how it obtains the per-process I/O information.
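Putting the two files together, here is a minimal Linux-only sketch; note that /proc/self/io is only present when the kernel was built with I/O accounting, and that statm reports current rather than peak usage, so this only approximates %M:

#include <stdio.h>
#include <string.h>

/* Read current memory usage (in pages) from /proc/self/statm and the
 * cumulative I/O counters from /proc/self/io -- approximations of the
 * %M/%I/%O values printed by /usr/bin/time. */
int main(void)
{
    long size, resident;
    char key[32];
    unsigned long long val;
    FILE *f;

    f = fopen("/proc/self/statm", "r");
    if (f == NULL || fscanf(f, "%ld %ld", &size, &resident) != 2) {
        perror("statm");
        return 1;
    }
    fclose(f);
    printf("total %ld pages, resident %ld pages\n", size, resident);

    f = fopen("/proc/self/io", "r");
    if (f == NULL) {
        perror("io");   /* kernel may lack I/O accounting */
        return 1;
    }
    while (fscanf(f, " %31[^:]: %llu", key, &val) == 2) {
        if (strcmp(key, "read_bytes") == 0 || strcmp(key, "write_bytes") == 0)
            printf("%s = %llu\n", key, val);
    }
    fclose(f);
    return 0;
}

For an exact counterpart of %M, the VmHWM line in /proc/self/status (or ru_maxrss from getrusage()) records the peak resident set size.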
I'm trying to use a somewhat old DAQ, and had to jump through a few hoops to get an old (circa 2004) device driver for it to compile (DTI-DT340 Linux-DAQ-PCI).
I've gotten to the point where it compiles, I can load the kernel module, it finds the card, and I can create the character devices using mknod.
But I can't seem to open these devices; I keep getting errno 19 (ENODEV, 'No such device') when I try
open("/dev/dt340/0",O_RDWR);
but mknod had no complaints about making them, and they're there:
# ls -l /dev/dt340/
total 0
crw-rw-r-- 1 root staff 250, 0 2009-04-23 11:02 0
crw-rw-r-- 1 root staff 250, 1 2009-04-23 11:02 1
crw-rw-r-- 1 root staff 250, 2 2009-04-23 11:02 2
crw-rw-r-- 1 root staff 250, 3 2009-04-23 11:02 3
Is there something I'm neglecting to do? What might be a reason open fails?
Here's the script I use to load the driver and make the devices.
#!/bin/bash
module="dt340"
device="dt340"
mode="664"
# invoke modprobe with all arguments we were passed
#/sbin/modprobe -t misc -lroot -f -s $module.o $* || exit 1
insmod $module.ko
# remove stale nodes
rm -f /dev/${device}/[0-3]
major=`awk "\\$2==\"$module\" {print \\$1}" /proc/devices`
mkdir -p /dev/${device}
mknod /dev/${device}/0 c $major 0
mknod /dev/${device}/1 c $major 1
mknod /dev/${device}/2 c $major 2
mknod /dev/${device}/3 c $major 3
# give appropriate group/permissions, and change the group
# not all distributions have staff; some have "users" instead
group="staff"
grep '^staff:' /etc/group > /dev/null || group="users"
chgrp $group /dev/${device}/[0-3]
chmod $mode /dev/${device}/[0-3]
Some additional info:
# grep dt340 /proc/devices
250 dt340
# lsmod | grep dt340
dt340 21516 0
# tail /var/log/messages
Apr 23 11:59:26 ve kernel: [ 412.862139] dt340 0000:03:01.0: PCI INT A -> GSI 22 (level, low) -> IRQ 22
Apr 23 11:59:26 ve kernel: [ 412.862362] dt340: In function dt340_init_one:
Apr 23 11:59:26 ve kernel: [ 412.862363] Device DT340 Rev 0x0 detected at address 0xfebf0000
# lspci | grep 340
03:01.0 Multimedia controller: Data Translation DT340
ANSWER: A printk confirmed that the -ENODEV was thrown from inside open(). Following an old-style
while ((pdev = pci_find_device(PCI_VENDOR_ID_DTI, PCI_ANY_ID, pdev)))
loop (pci_find_device is deprecated), if (!pdev) ends up true and open() returns the -ENODEV.
I'm inching closer; I guess I have to work through and update the PCI code to use more modern mechanisms...
If the device shows up in /proc/devices and you're sure you've got the number right in mknod, then the driver itself is refusing the open. The driver can return any error code from open(), including 'no such device', which it might do if it discovered a problem initialising the hardware.
I'd guess it is a problem in the driver; check the open function.
It shows up in /proc/devices, so all the generic device registration seems to be OK.
mknod doesn't care whether there is a device corresponding to the given major/minor numbers. Are you sure insmod is installing your module? What does lsmod tell you?
I'm unfamiliar with having to add the '.ko' extension. Is that something specific to your device driver?
Check through lspci and make sure the hardware is properly initialized. If your system supports hotplug, pci_find_device won't work; the problem with it is reference counting. The best way to deal with this (and to learn) is to dissect the API. Best of luck!
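To sketch what "more modern mechanisms" means here: since the 2.6 kernels, the preferred pattern is to register a pci_driver with an ID table and let the PCI core call your probe() for each matching card, instead of walking the bus with pci_find_device. Roughly (untested, details depend on your kernel version; PCI_VENDOR_ID_DTI comes from the existing driver source, and the dt340_* names are placeholders):

#include <linux/module.h>
#include <linux/pci.h>

/* ID table: the PCI core matches this against present devices and
 * calls dt340_probe() for each hit -- hotplug-safe and refcounted. */
static struct pci_device_id dt340_ids[] = {
    { PCI_DEVICE(PCI_VENDOR_ID_DTI, PCI_ANY_ID) },
    { 0, }
};
MODULE_DEVICE_TABLE(pci, dt340_ids);

static int dt340_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
    int err = pci_enable_device(pdev);
    if (err)
        return err;
    /* map BARs, request the IRQ, set up per-card state here */
    return 0;
}

static void dt340_remove(struct pci_dev *pdev)
{
    /* tear down whatever probe() set up */
    pci_disable_device(pdev);
}

static struct pci_driver dt340_driver = {
    .name     = "dt340",
    .id_table = dt340_ids,
    .probe    = dt340_probe,
    .remove   = dt340_remove,
};

static int __init dt340_init(void)
{
    return pci_register_driver(&dt340_driver);
}

static void __exit dt340_exit(void)
{
    pci_unregister_driver(&dt340_driver);
}

module_init(dt340_init);
module_exit(dt340_exit);
MODULE_LICENSE("GPL");

With that in place, open() can look up the per-card state saved by probe() instead of rescanning the bus, which is where the stale pci_find_device check was generating the -ENODEV.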