My Elixir app is using about 50% of the CPU, but it really should only be using <1%. I'm trying to figure out what is causing the high CPU usage and I'm having some trouble.
In a remote console, I tried:
- listing all processes with Process.list
- looking at the process info with Process.info
- sorting the processes by reduction count (roughly as sketched below)
- sorting the processes by message queue length
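Roughly, the reduction-count sort looked like this (a from-memory sketch, not the exact session):

    Process.list()
    |> Enum.map(fn pid -> {pid, Process.info(pid, :reductions)} end)
    |> Enum.reject(fn {_pid, info} -> info == nil end)           # dead pids return nil
    |> Enum.sort_by(fn {_pid, {:reductions, reds}} -> -reds end) # highest first
    |> Enum.take(10)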
The message queues are all close to 0, but the reduction counts are very high for some processes. The processes with high reduction counts are named:
(1) :file_server_2
(2) ReactPhoenix.ReactIo.Pool
(3) :code_server
(1) and (3) are both present in my other apps, so I feel like it must be (2). This is where I'm stuck. How can I go further and figure out why (2) is using so much CPU?
I know that ReactPhoenix uses react-stdio. Looking at top, react-stdio doesn't use any resources, but the beam does.
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 87 53.2 1.2 2822012 99212 ? Sl Nov20 580:03 /app/erts-9.1/bin/beam.smp -Bd -- -root /app -progname app/releases/0.0.1/hello.sh -- -home /root -- -noshell -noshell -noinput -boot /app/
root 13873 0.0 0.0 4460 792 ? Rs 13:54 0:00 /bin/sh -c deps/react_phoenix/node_modules/.bin/react-stdio
I saw in this StackOverflow post that stdin can cause resource issues, but I'm unsure if that applies here. Anyway, any help would be greatly appreciated!
Did you try etop?
iex(2)> :etop.start
========================================================================================
nonode@nohost                                                              14:57:45
Load: cpu 0 Memory: total 26754 binary 143
procs 51 processes 8462 code 7201
runq 0 atom 292 ets 392
Pid Name or Initial Func Time Reds Memory MsgQ Current Function
----------------------------------------------------------------------------------------
<0.6.0> erl_prim_loader '-' 458002 109280 0 erl_prim_loader:loop
<0.38.0> code_server '-' 130576 196984 0 code_server:loop/1
<0.33.0> application_controll '-' 58731 831632 0 gen_server:loop/7
<0.88.0> etop_server '-' 58723 109472 0 etop:data_handler/2
<0.53.0> group:server/3 '-' 19364 2917928 0 group:server_loop/3
<0.61.0> disk_log:init/2 '-' 16246 318352 0 disk_log:loop/1
<0.46.0> file_server_2 '-' 3838 18752 0 gen_server:loop/7
<0.51.0> user_drv '-' 3720 13832 0 user_drv:server_loop
<0.0.0> init '-' 2559 34440 0 init:loop/1
<0.37.0> kernel_sup '-' 2093 58600 0 gen_server:loop/7
========================================================================================
http://erlang.org/doc/man/etop.html
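:etop.start also accepts options, so you can sort by reductions directly and tighten the refresh interval, e.g. (option names as in the etop docs):

    :etop.start(sort: :reductions, interval: 1)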
I'm trying to find all the shortest paths from one source node to every destination node (so 1-3, 1-5, 1-4), with the respective cost of each shortest path.
I've tried this code:
node(1..5).
edge(1,2,1).
edge(2,3,9).
edge(3,4,4).
edge(4,1,4).
edge(1,3,1).
edge(3,5,7).
start(1).
end(3).
end(4).
end(5).
0{selected(X,Y)}1:-edge(X,Y,W).
path(X,Y):-selected(X,Y).
path(X,Z):-path(X,Y),path(Y,Z).
:-start(X),end(Y),not path(X,Y).
cost(C):-C=#sum{W,X,Y:edge(X,Y,W),selected(X,Y)}.
#minimize{C:cost(C)}.
#show selected/2.
but my code returns this answer:
clingo version 5.6.0 (c0a2cf99)
Reading from stdin
Solving...
Answer: 1
selected(3,4) selected(1,3) selected(3,5)
Optimization: 12
OPTIMUM FOUND

Models : 1
Optimum : yes
Optimization : 12
Calls : 1
Time : 0.043s (Solving: 0.00s 1st Model: 0.00s Unsat: 0.00s)
CPU Time : 0.000s
What is wrong? How can I enumerate all the shortest paths with their respective costs?
Surely one error is that you are aggregating all the costs into a single C while, if I have understood correctly, you need distinct costs depending on the ending node.
There may also be other errors, but I can't say exactly what you meant with that program.
I would write it as follows:
node(1..5) .
edge(1,2,1) .
edge(2,3,9) .
edge(3,4,4) .
edge(4,1,4) .
edge(1,3,1) .
edge(3,5,7) .
start(1) .
end(3) .
end(4) .
end(5) .
% For each destination E, some outgoing edge from the start node should be selected
:- start(S), end(E), not selected(S,_,E) .
% No edge pointing to the start node should be selected
:- start(S), selected(_,S,_) .
% If an edge points to the end node, then it may be (or not be) selected for reaching it
0{selected(X,E,E)}1 :- edge(X,E,_), end(E) .
% If an outgoing edge from Y has been selected for reaching E, then an incoming edge may be (or not be) selected for reaching E
0{selected(X,Y,E)}1 :- edge(X,Y,_), selected(Y,_,E) .
% Compute the cost for reaching E
cost(E,C) :- C=#sum{W : edge(X,Y,W), selected(X,Y,E)}, end(E) .
#minimize{C : cost(E,C)} .
#show selected/3 .
#show cost/2 .
The execution of the above program is as follows:
clingo version 5.3.0
Reading from test.lp
Solving...
Answer: 1
selected(3,5,5) selected(1,3,3) selected(3,4,4) selected(1,3,4) selected(1,3,5) cost(3,1) cost(4,5) cost(5,8)
Optimization: 14
OPTIMUM FOUND
Models : 1
Optimum : yes
Optimization : 14
Calls : 1
Time : 0.017s (Solving: 0.00s 1st Model: 0.00s Unsat: 0.00s)
CPU Time : 0.000s
where:
an atom selected(X,Y,Z) indicates that the edge (X,Y) has been selected for reaching the node Z;
an atom cost(E,C) indicates that the minimum cost for reaching the end node E is C.
The starting node is implicit since it is unique.
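As a side note (not part of the original answer): since the question asks to enumerate all shortest paths, you can ask clingo to enumerate every optimal model instead of stopping at the first one:

    clingo test.lp --opt-mode=optN 0

Here --opt-mode=optN first finds the optimum and then enumerates all optimal models, and 0 removes the limit on the number of models printed.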
I'm unable to write 1514 bytes (including the L2 information) via write() to /dev/bpf. I can write smaller packets (meaning I think the basic setup is correct), but I see "Message too long" (EMSGSIZE) with the full-length packets. This is on Solaris 11.2.
It's as though the write is being treated as the write of an IP packet.
Per the specs, there are 1500 bytes for the IP portion, 14 for the L2 header (18 if VLAN tagging), and 4 bytes for the frame check sequence; the 1514 bytes handed to write() are the 1500 + 14, with the 4-byte FCS normally appended by the hardware.
I've set the feature that I thought would prevent the OS from adding its own layer 2 information (yes, I also find it odd that a 1 disables it; pseudo-code below):
int hdr_complete = 1; /* 1 = the caller supplies the full link-level header */
if (ioctl(bpf, BIOCSHDRCMPLT, &hdr_complete) == -1)
    perror("BIOCSHDRCMPLT");
The packets are never larger than 1514 bytes (they're captured via a port span and start with the source and destination MAC addresses; I'm effectively replaying them).
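One check I still intend to run (my own idea, nothing authoritative): compare the failing write size against the interface MTU, since 1514 is exactly the 1500-byte MTU plus the 14-byte Ethernet header. A sketch of the query, with a helper name of my own; on Solaris the struct lifreq / SIOCGLIFMTU variant may be needed instead of the BSD-style ifreq:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <net/if.h>

    /* Print the MTU of the interface the bpf descriptor is bound to. */
    static int print_mtu(const char *ifname)
    {
        struct ifreq ifr;
        int s = socket(AF_INET, SOCK_DGRAM, 0);

        if (s == -1) { perror("socket"); return -1; }
        memset(&ifr, 0, sizeof ifr);
        strncpy(ifr.ifr_name, ifname, sizeof ifr.ifr_name - 1);
        if (ioctl(s, SIOCGIFMTU, &ifr) == -1) {
            perror("SIOCGIFMTU");
            close(s);
            return -1;
        }
        printf("%s MTU: %d\n", ifname, ifr.ifr_mtu);
        close(s);
        return 0;
    }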
I'm sure I'm missing something basic here, but I'm hitting a dead end. Any pointers would be much appreciated!
Partial Answer: This link was very helpful.
Update 3/20/2017
The code works on Mac OS X, but on Solaris it results in repeated "Interrupted system call" (EINTR) errors. I'm starting to read scary things about having to implement signal handling, which I'd rather not do...
Sample code is on GitHub, based on various code I've found via Google. On most systems you have to run this with root privileges unless you've granted "net_rawaccess" to the user.
I'm still trying to figure out the EINTR issue. Output from truss:
27158/1: 0.0122 0.0000 write(3, 0x08081DD0, 1514) Err#4 EINTR
27158/1: \0 >E1C09B92 4159E01C694\b\0 E\005DC82E1 #\0 #06F8 xC0A81C\fC0A8
27158/1: 1C eC8EF14 Q nB0BC 4 V #FBDE8010FFFF8313\0\00101\b\n ^F3 W # C E
27158/1: d SDD G14EDEB ~ t sCFADC6 qE3C3B7 ,D9D51D VB0DFB0\b96C4B8EC1C90
27158/1: 12F9D7 &E6C2A4 Z 6 t\bFCE5EBBF9C1798 r 4EF "139F +A9 cE3957F tA7
27158/1: x KCD _0E qB9 DE5C1 #CAACFF gC398D9F787FB\n & &B389\n H\t ~EF81
27158/1: C9BCE0D7 .9A1B13 [ [DE\b [ ECBF31EC3 z19CDA0 #81 ) JC9 2C8B9B491
27158/1: u94 iA3 .84B78AE09592 ;DA ] .F8 A811EE H Q o q9B 8A4 cF1 XF5 g
27158/1: EC ^\n1BE2C1A5C2 V 7FD 094 + (B5D3 :A31B8B128D ' J 18A <897FA3 u
EDIT 7 April 2017
The EINTR problem was the result of a bug in the sample code that I placed on GitHub. The code was not associating the bpf device with the actual interface, and Solaris was returning EINTR as a result.
Now I'm back to the "message too long" problem that I still haven't resolved.
Question
For a host machine that uses the token bucket algorithm for congestion control, the token bucket has a capacity of 1 mega byte and the maximum output rate is 20 mega bytes per second. Tokens arrive at a rate to sustain output at a rate of 10 mega bytes per second. The token bucket is currently full and the machine needs to send 12 mega bytes of data. The minimum time required to transmit the data is _____________ seconds.
My Approach
Initially the token bucket is full. The rate at which it empties is (20-10) MBps, so the time taken to empty the 1 MB bucket is 1/10, i.e. 0.1 sec.
But the answer is given as 1.2 sec.
The token bucket has a capacity of 1 megabyte (maximum capacity C).
Here one byte is considered one token
⇒ C = 1 M tokens
The output rate is 20 megabytes per second (M = 20 MBps).
Tokens arrive at a rate that sustains output at 10 megabytes per second
⇒ 20 - R = 10
⇒ input rate R = 10 MBps
Unlike the leaky bucket, idle hosts can capture and save up c ≤ C tokens in order to send larger bursts later.
When we begin the transfer, the tokens already present in the token bucket are transmitted to the network at once; i.e. if the token bucket initially contains c tokens, then those c tokens are instantly present in the network.
Time to empty the token bucket:
c: the initial number of tokens in the bucket
R: every second we get R new tokens
M: every second M tokens are consumed (transmitted)
INPUT FLOW: the number of tokens ready to enter the network during a time interval t is c + Rt
OUTPUT FLOW: the number of tokens the network can carry away during a time interval t is Mt
INPUT FLOW = OUTPUT FLOW
⇒ c + Rt = Mt
⇒ t = c/(M - R) = 1/(20 - 10) = 0.1 sec
Given that the token bucket is full (c = C), we have two cases:
1. the 1 M tokens are transferred instantly, at t = 0, or
2. transferring the 1 M tokens takes 1/(20 - 10) = 0.1 sec.
Case 1: the 1 M (initial) tokens are transferred instantly at t = 0.
Consider the equation
INPUT FLOW = c + Rt
This means that the c tokens (initially contained in the token bucket) are transmitted without any delay.
Unlike the leaky bucket, a token bucket keeps accumulating tokens while the sender is idle; once the sender is ready, the packets take the tokens and are transmitted to the network. That accounts for the c term, and then we add the R tokens produced in time t to finally get the INPUT FLOW.
⇒ 1 MB is transmitted instantly. Now we are left with 11 MB to transmit.
To transfer the remaining 11 MB:
at t = 0 we begin transmitting the 11 MB of data.
at t = 0.1 sec: 1 MB (1 MB transferred)
at t = 0.2 sec: 1 MB (2 MB transferred)
..
..
at t = 1.1 sec: 1 MB (11 MB transferred)
Therefore to transfer 12 MB it takes 0 sec + 1.1 sec = 1.1 sec.
Case 2: transferring the 1 M (initial) tokens takes 0.1 sec.
(If it takes 0.1 sec for 1 MB, one could argue that it will take 1.2 sec for 12 MB.)
During that 0.1 sec, 0.1 × 10 MBps = 1 M tokens are refilled.
t = 0 s: begin to transfer the 12 MB of data.
t = 0.1 s: 1 MB (1 MB transferred)
t = 0.2 s: 1 MB (2 MB transferred)
t = 0.3 s: 1 MB (3 MB transferred)
..
..
t = 1.2 s: 1 MB (12 MB transferred)
Therefore to transfer 12 MB it takes 1.2 sec.
The question does not clearly specify which interpretation is intended, and it is common practice to follow the best case.
Therefore the answer would be 1.1 sec.
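As a cross-check (the standard textbook treatment, not part of the original explanation): with a full bucket the host can transmit at the full output rate M until the bucket drains, and only then drops to the refill rate R:

    burst duration S = C/(M - R) = 1/(20 - 10) = 0.1 sec
    data sent during the burst = M × S = 20 × 0.1 = 2 MB
    remaining data = 12 - 2 = 10 MB at R = 10 MBps ⇒ 1.0 sec
    total = 0.1 + 1.0 = 1.1 sec

which agrees with the 1.1 sec above.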
More Information : Visit Gate Overflow - Gate 2016 Question on Token Bucket
I'm not sure whether this is a Linux kernel bug; I searched many documents and could not find any hints.
I am asking to check whether anyone has met a similar issue and how it was solved.
Environment:
Linux Kernel: 2.6.34.10
CPU: MIPS 64 (total 8 cores)
application running in user space
The application has strict response-time requirements, so the application threads were set to SCHED_FIFO, and some key threads were pinned to dedicated CPU cores; everything was fine in this configuration. Later someone found that CPU peaks (e.g. 60%-80% for short periods) sometimes happened on some CPU cores. To solve this, we kept CPU 0 and CPU 7 for native Linux processes and isolated CPUs 1-6 for our application by adding "isolcpus=1-6" to the boot line. The CPU-peak issue was solved, but it led to the following issue.
The following message is printed on the console after the system has been running for some time, and the system hangs up; not always, but sporadically (and it can happen on multiple CPU cores):
BUG: soft lockup - CPU#4 stuck for 61s! [swapper:0]
Modules linked in: hdml softdog cmt cmm pio clock linux_kernel_bde linux_uk_proxy linux_bcm_core mpt2sas
Cpu 4
$ 0 : 0000000000000000 ffffffffc3600020 ffffffffc1000b00 c0000001006f0010
$ 4 : 0000000000000001 0000000000000001 000000005410f8e0 ffffffffbfff00fe
$ 8 : 000000000000001e ffffffffc15b3c80 0000000000000002 0d0d0d0d0d0d0d0d
$12 : 0000000000000000 000000004000f800 0000000000000000 c000000100768000
$16 : ffffffffc36108e0 0000000000000010 ffffffffc35f0000 0000000000000000
$20 : 0000000000000000 0000000000000000 0000000000000000 0000000000000000
$24 : 0000000000000007 ffffffffc103b3a0
$28 : c0000001006f0000 c0000001006f3e38 0000000000000000 ffffffffc103d774
Hi : 0000000000000000
Lo : 003d0980b38a5000
epc : ffffffffc1000b20 r4k_wait+0x20/0x40
Not tainted
ra : ffffffffc103d774 cpu_idle+0xbc/0xc8
Status: 5410f8e3 KX SX UX KERNEL EXL IE
Cause : 40808000
Looking at the call trace, the thread was always pending on a condition-variable wait. The pseudo-code for the wait/signal functions is as follows:
int xxx_ipc_wait(int target)
{
struct timespec to;
.... /* other code */
clock_gettime(CLOCK_MONOTONIC, &to);
timespec_add_ns(&to, 1000000);
pthread_mutex_lock(&ipc_queue_mutex[target]);
ret = pthread_cond_timedwait (&ipc_queue_cond[target], &ipc_queue_mutex[target], &to);
pthread_mutex_unlock(&ipc_queue_mutex[target]);
return ret;
}
void xxx_ipc_signal_atonce(int target)
{
...
pthread_mutex_lock(&ipc_queue_mutex[target]);
pthread_cond_signal(&ipc_queue_cond[target]);
pthread_mutex_unlock(&ipc_queue_mutex[target]);
}
Those waits should wake up regardless, because the condition-variable wait has a timeout. I even created a dedicated Linux thread to signal those condition variables periodically, e.g. every 5 seconds, but the issue was still there.
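One side note on the pseudo-code above (an observation of mine, not necessarily related to the lockup): pthread_cond_timedwait() interprets the absolute timeout against the clock of the condition variable, which is CLOCK_REALTIME unless the condvar was created with pthread_condattr_setclock(). Since the timeout here is built from CLOCK_MONOTONIC, the condvars would need to be initialized along these lines:

    #include <pthread.h>
    #include <time.h>

    /* Initialize a condvar so that pthread_cond_timedwait() measures its
       timeout on CLOCK_MONOTONIC, matching the clock_gettime() call above. */
    static int init_monotonic_cond(pthread_cond_t *cond)
    {
        pthread_condattr_t attr;
        int err = pthread_condattr_init(&attr);

        if (err != 0)
            return err;
        err = pthread_condattr_setclock(&attr, CLOCK_MONOTONIC);
        if (err == 0)
            err = pthread_cond_init(cond, &attr);
        pthread_condattr_destroy(&attr);
        return err;
    }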
I checked the kernel log with "dmesg" and didn't find anything valuable. When I enabled kernel debugging and checked /proc/sched_debug, there was strange information, as follows:
cpu#1 /* it is a normal CPU core */
.nr_running : 1
.load : 0
.nr_switches : 1892378
.nr_load_updates : 167378
.nr_uninterruptible : 0
.next_balance : 4295.060682
.curr->pid : 235 /* it points to the runnable task */
task PID tree-key switches prio exec-runtime sum-exec sum-sleep
----------------------------------------------------------------------------------------------------------
R aaTask 235 0.000000 157 49 0 0
cpu#4
.nr_running : 1 /* okay */
.load : 0
.nr_switches : 2120455 /* this value changes from time to time */
.nr_load_updates : 185729
.nr_uninterruptible : 0
.next_balance : 4295.076207
.curr->pid : 0 /* why is this ZERO when it has a runnable task? */
.clock : 746624.000000
.cpu_load[0] : 0
.cpu_load[1] : 0
.cpu_load[2] : 0
.cpu_load[3] : 0
.cpu_load[4] : 0
cfs_rq[4]:/
.exec_clock : 0.000000
.MIN_vruntime : 0.000001
.min_vruntime : 14.951424
.max_vruntime : 0.000001
.spread : 0.000000
.spread0 : -6833.777140
.nr_running : 0
.load : 0
.nr_spread_over : 0
.shares : 0
rt_rq[4]:/
.rt_nr_running : 1
.rt_throttled : 1 /* note: the RT runqueue on this CPU has been throttled */
.rt_time : 900.000000
.rt_runtime : 897.915785
runnable tasks:
task PID tree-key switches prio exec-runtime sum-exec sum-sleep
----------------------------------------------------------------------------------------------------------
bbbb_appl 299 6.664495 1059441 49 0 0 0.000000 0.000000 0.000000 /
I don't know why the Linux system works like this. In the end, I changed the task priority from SCHED_FIFO to SCHED_OTHER, and the issue has not happened in months of running. Since the CPU cores are isolated, the system's behavior is similar between SCHED_FIFO and SCHED_OTHER, and SCHED_OTHER is more widely used.
An app waiting on a condition/mutex forever might be a sign of priority inversion, unless it's using priority-inheritance-enabled synchronization primitives.
In the FIFO real-time scheduling mode a thread keeps the CPU until it voluntarily gives it up, which is quite different from the preemptive multitasking most software is written for.
Unless your requirements explicitly call for real-time FIFO, I would not spend the time and would rather stick with RR and/or CPU pinning/isolation.
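If you do end up switching policies programmatically, as the poster eventually did, a minimal sketch (the helper name and error handling are illustrative only):

    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    /* Demote the calling thread from SCHED_FIFO to SCHED_OTHER,
       the default time-sharing policy. */
    static int demote_to_sched_other(void)
    {
        struct sched_param sp = { .sched_priority = 0 }; /* must be 0 for SCHED_OTHER */
        int err = pthread_setschedparam(pthread_self(), SCHED_OTHER, &sp);

        if (err != 0)
            fprintf(stderr, "pthread_setschedparam: %d\n", err);
        return err;
    }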
I'm trying to learn how to deal with NURBS surfaces for a project. Basically I want to build a geometry in some 3D program with NURBS, then export the geometry and run some simulations with it. I have figured out NURBS curves, and I think I mostly understand how surfaces work, but what I don't get is how the control points are connected. Apparently you don't need any topology matrix as with polygons? When I export NURBS surfaces from Maya in the .ma format, which is a plain text file, I can see the knot vectors and then just a list of points; no topology information. How does this work? How can you reconstruct the NURBS surface without knowing how the points are connected to each other? The exported file is written below:
//Maya ASCII 2013 scene
//Name: test4.ma
//Last modified: Sat, Jan 26, 2013 07:21:36 PM
//Codeset: UTF-8
requires maya "2013";
requires "stereoCamera" "10.0";
currentUnit -l centimeter -a degree -t film;
fileInfo "application" "maya";
fileInfo "product" "Maya 2013";
fileInfo "version" "2013 x64";
fileInfo "cutIdentifier" "201207040330-835994";
fileInfo "osv" "Mac OS X 10.8.2";
fileInfo "license" "student";
createNode transform -n "loftedSurface1";
setAttr ".t" -type "double3" -0.68884794895562784 0 -3.8172687581953233 ;
createNode nurbsSurface -n "loftedSurfaceShape1" -p "loftedSurface1";
setAttr -k off ".v";
setAttr ".vir" yes;
setAttr ".vif" yes;
setAttr ".covm[0]" 0 1 1;
setAttr ".cdvm[0]" 0 1 1;
setAttr ".dvu" 0;
setAttr ".dvv" 0;
setAttr ".cpr" 4;
setAttr ".cps" 4;
setAttr ".cc" -type "nurbsSurface"
3 3 0 0 no
8 0 0 0 1 2 3 3 3
11 0 0 0 1 2 3 4 5 6 6 6
54
0.032814107781307778 -0.01084889661073064 -2.5450696958149557
0.032814107781308312 -0.010848896610730773 -1.6967131305433036
0.032824475105651972 -0.010848896610730714 -0.0016892641735144487
0.032777822146102309 -0.01084889661073018 2.5509821204222565
0.032948882997777158 -0.010848896610730326 5.3256822304677218
0.032311292550627417 -0.010848896610730283 7.5033561343333179
0.034690593487551526 -0.010848896610730296 11.39484483093603
0.014785648001686571 -0.010848896610730293 11.972583607988943
-0.00012526283089935193 -0.010848896610730293 12.513351622510489
0.87607723187763198 -0.023973071493875439 -2.5450696958149557
0.87607723187766595 -0.023973071493876091 -1.6967131305433036
0.87636198619878247 -0.023973071493875821 0.00026157734839016289
0.87508059175355446 -0.023973071493873142 2.5441541750955903
0.87977903805225144 -0.023973071493873861 5.3510431702524812
0.86226664730269065 -0.02397307149387367 7.4087403205209448
0.9276177640022375 -0.023973071493873725 11.747947146400762
0.39164345444212556 -0.023973071493873704 12.72679599298271
-0.003344290659457324 -0.023973071493873708 13.356608602511475
2.7585407036097025 0.080696275184513055 -2.5450696958149557
2.7979735813230628 0.036005680442686323 -1.6988092981025378
2.7828331201271896 0.05438167150027777 0.0049374879309111996
2.6143679292284574 0.23983328019207673 2.5309327393956176
2.67593270347135 0.19013709747074492 5.3992530024698517
2.5981387973985108 0.20347021966427298 7.2291224273514345
2.8477496474469728 0.19983391361149261 12.418208886861429
1.1034136098865515 0.20064198162322153 14.474560637904968
-0.010126299867110311 0.20064198162322155 15.133224682698101
4.5214126649737496 0.45953483463333544 -2.5450696958149557
4.6561826938778452 0.23941045408996731 -1.7369291398229287
4.6267725925384751 0.29043329565744253 0.025561242784985394
3.9504978751410711 1.3815767918640129 2.5159293599869446
4.1596851721552888 1.0891788615080038 5.438642765250469
3.9992107014958198 1.1676270867254697 7.0865667556376426
4.4319212871194775 1.1462321162116154 12.949041810935984
1.6384310220676352 1.1509865541035829 15.927795222282771
-0.015643773215464073 1.1509865541035829 16.578582772395933
5.2193823159440154 3.0233786192453191 -2.5450696958149557
5.2193823159440162 3.0233786192453196 -1.6967131305433036
5.2218229691816047 3.0233786192453191 0.0091618497226043649
5.2108400296124504 3.0233786192453196 2.5130032217858407
5.251110808032692 3.0233786192453191 5.4667467111172652
5.1010106339208772 3.0233786192453191 6.9770771103715621
5.6611405519478906 3.0233786192453205 13.358896446133507
2.0430537629341199 3.0233786192453183 17.059047057656215
-0.019924192630756767 3.0233786192453191 17.6998820408444
5.1365144716134976 5.4897102753589557 -2.5450696958149557
5.1365144716134994 5.4897102753589566 -1.6967131305433036
5.1389093836131625 5.4897102753589566 0.0089946049919694682
5.1281322796146718 5.4897102753589566 2.5135885783430627
5.1676483276091361 5.4897102753589548 5.4645725296190131
5.0203612396297714 5.4897102753589566 6.9851884798073476
5.5699935435527692 5.4897102753589566 13.328625149888618
2.0133428487217855 5.4897102753589557 16.975388787391935
-0.01960785732642523 5.4897102753589557 17.617014800296868
;
select -ne :time1;
setAttr ".o" 1;
setAttr ".unw" 1;
select -ne :renderPartition;
setAttr -s 2 ".st";
select -ne :initialShadingGroup;
setAttr ".ro" yes;
select -ne :initialParticleSE;
setAttr ".ro" yes;
select -ne :defaultShaderList1;
setAttr -s 2 ".s";
select -ne :postProcessList1;
setAttr -s 2 ".p";
select -ne :defaultRenderingList1;
select -ne :renderGlobalsList1;
select -ne :hardwareRenderGlobals;
setAttr ".ctrs" 256;
setAttr ".btrs" 512;
select -ne :defaultHardwareRenderGlobals;
setAttr ".fn" -type "string" "im";
setAttr ".res" -type "string" "ntsc_4d 646 485 1.333";
select -ne :ikSystem;
setAttr -s 4 ".sol";
connectAttr "loftedSurfaceShape1.iog" ":initialShadingGroup.dsm" -na;
// End of test4.ma
A NURBS surface is always topologically square, with degree+spans points in the U direction and (degree-1)+spans+1* points in the V direction. (A single NURBS surface is like one face of a polygon, only more complicated.)
The first two attributes in ".cc" are the degrees in the two directions, and the next two lines define the knot vectors; a duplicated knot value means the knot is repeated that many times (its multiplicity). So:
8 0 0 0 1 2 3 3 3
means there are 8 knots (in this case in the U direction) covering the span values 0 1 2 3, which for a cubic gives degree + spans = 3 + 3 = 6 points in the U direction. The example has 9 points in the V direction, and thus 6 × 9 = 54 points in total.
This is not enough, however, for NURBS to be even remotely useful. You must implement trim curves, which are curves that lie in the UV parametrization of the surface and can clip an individual NURBS patch to a different shape.
In practice, however, Maya users rely on manual quilting. Quilts** are the higher-order NURBS equivalent of a mesh, a concept most NURBS modelers use. To handle these, even trim curves are often not enough, since trim curves cannot be reliably transported between applications without sewing. Thus many applications record, as construction history, how the surfaces in a quilt collection are topologically connected. So be prepared to write your own intersection algorithms, etc., for any meaningful NURBS compatibility.
For more on the mathematical underpinnings, see Wikipedia, Wolfram MathWorld, etc.
* If I remember correctly; something like that.
** Quilts have different names in different applications, due to simultaneous discovery in several different language areas.
NURBS surfaces' CVs are always laid out in a grid. The number of CVs in a nurbs surface can be computed using the degree of the surface and the number of knots in each direction. Then the CVs are just presented in some specific order, typically row-major.
Let's look at your example. I'm mostly just guessing the format, so you'll want to check my assumptions.
3 3 0 0 no
It looks like you have a bicubic surface. It's not periodic in either direction (that is, you have a sheet rather than a cylinder or torus). Your CVs are non-rational, meaning they're [x,y,z] instead of [xw,yw,zw,w].
In other words, the format of that first line appears to be:
[degree in s] [degree in t] [periodic in s] [periodic in t] [rational]
Next up, one knot vector has 8 knot values, and the other has 11. For a degree 3 non-periodic nurbs, the number of CVs is num_knots - 2. So, you have 6 x 9 CVs in this surface.
The first 6 CVs are in the first row. The next 6 are in the next row, etc.
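So fetching CV (u, v) out of the flat list is just row-major indexing; a quick sketch (the helper is mine, and you should double-check row vs. column order against your own data):

    /* Fetch control vertex (u, v) from the flat CV list in the .ma file,
       assuming row-major order with num_u CVs per row (6 in this example). */
    typedef struct { double x, y, z; } cv_t;

    static cv_t cv_at(const cv_t *cvs, int num_u, int u, int v)
    {
        return cvs[v * num_u + u]; /* v selects the row, u the column */
    }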
If you're looking for more information on NURBS, I'd recommend this text for the theory. For Maya-specific stuff, there's some decent documentation in the Maya API docs.