I use FFmpeg and ImageMagick to extract the color palette from an image or video with a Windows batch file:
:: get current folder name
for %%* in (.) do set CurrDirName=%%~nx*
:: get current filename
for /R %1 %%f in (.) do (
set CurrFileName=%%~nf
)
ffmpeg -i "%1" -vf palettegen "_%CurrFileName%_temp_palette.png"
convert "_%CurrFileName%_temp_palette.png" -filter point -resize 4200%% "_%CurrFileName%_palette.png"
del "_%CurrFileName%_temp_palette.png"
This outputs something like this:
I need this to have a better transition across the color blocks though, like all blues from darkest to lightest, then transitioning to greens, yellows, etc., like this:
Is there a way/switch to create this with either ImageMagick or FFmpeg?
I don't like working on Windows, but I wanted to show you a technique you could use. I have therefore written it in bash but avoided nearly all Unix-y stuff and made it very simple. To run it on Windows, you would only need ImageMagick plus awk and sort, which you can get for Windows from here and here.
I'll demonstrate using an image of random data that the script creates around the third line:
Here is the script. It is pretty well commented and should convert easily enough to Windows if you like what it does.
#!/bin/bash
# Create an initial image of random data 100x100
convert -size 100x100 xc:gray +noise random image.png
# Extract unique colours in image and convert to HSL colourspace and thence to text for "awk"
convert image.png -unique-colors -colorspace hsl -depth 8 txt: | awk -F"[(), ]+" '
!/^#/{
H=$3; S=$4; L=$5 # Pick up HSL. For Hue, 32768=180deg, 65535=360deg. For S&L, 32768=50%, 65535=100%
NGROUPS=4 # Change this according to the number of groups of colours you want
bin=65535/NGROUPS # Calculate bin size
group=int(int(H/bin)*bin) # Split Hue into NGROUPS
printf "%d %d %d %d\n",group,H,S,L
}' > groupHSL.txt
# Sort by column 1 (group) then by column 4 (Lightness)
sort -n -k1,1 -k4,4 < groupHSL.txt > groupHSL-sorted.txt
# Reassemble the sorted pixels back into a simple image, 16-bit PNM format of HSL data
# Discard the group in column 1 ($1) that we used to sort the data
awk ' { H[++i]=$2; S[i]=$3; L[i]=$4 }
END {
printf "P3\n"
printf "%d %d\n",i,1
printf "65535\n"
for(j=1;j<=i;j++){
printf "%d %d %d\n",H[j],S[j],L[j]
}
}' groupHSL-sorted.txt > HSL.pnm
# Convert HSL.pnm to sRGB.png
convert HSL.pnm -set colorspace hsl -colorspace sRGB sRGB.png
# Make squareish shape
convert sRGB.png -crop 1x1 miff:- | montage -geometry +0+0 -tile 40x - result.png
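As a side check, the hue-binning arithmetic from the first awk stage can be exercised on its own (a standalone sketch; the hue value 40000 is just an example):

```shell
# With NGROUPS=4 the bin size is 65535/4 = 16383.75; a hue of 40000
# falls in bin 2, so it is quantised down to int(2*16383.75) = 32767.
awk 'BEGIN {
  NGROUPS = 4
  bin = 65535 / NGROUPS
  H = 40000
  print int(int(H / bin) * bin)
}'
```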
If I set NGROUPS to 10, I get:
If I set NGROUPS to 4, I get:
Note that, rather than using pipes and shell tricks, the script generates intermediate files so you can easily see each stage of the processing in order to debug it.
For example, if you run this:
convert image.png -unique-colors -colorspace hsl -depth 8 txt: | more
you will see the output that convert is passing to awk:
# ImageMagick pixel enumeration: 10000,1,255,hsl
0,0: (257,22359,1285) #015705 hsl(1.41176,34.1176%,1.96078%)
1,0: (0,0,1542) #000006 hsl(0,0%,2.35294%)
2,0: (41634,60652,1799) #A2EC07 hsl(228.706,92.549%,2.7451%)
3,0: (40349,61166,1799) #9DEE07 hsl(221.647,93.3333%,2.7451%)
4,0: (31868,49858,2056) #7CC208 hsl(175.059,76.0784%,3.13725%)
5,0: (5140,41377,3341) #14A10D hsl(28.2353,63.1373%,5.09804%)
6,0: (61423,59367,3598) #EFE70E hsl(337.412,90.5882%,5.4902%)
If you look at groupHSL-sorted.txt, you will see how the pixels have been sorted into groups and then increasing lightness:
0 0 53456 10537
0 0 18504 20303
0 0 41377 24158
0 0 21331 25700
0 0 62708 28270
0 0 53199 31354
0 0 23130 32896
0 0 8738 33410
0 0 44204 36494
0 0 44204 36751
0 0 46260 38293
0 0 56283 40606
0 0 53456 45489
0 0 0 46517
0 0 32896 46517
0 0 16191 50372
0 0 49601 55769
0 257 49601 11565
0 257 42148 14392
0 257 53713 14649
0 257 50115 15677
0 257 48830 16191
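The two-key numeric sort used above (group first, then lightness) can be verified on a toy input:

```shell
# The two rows sharing group 0 in column 1 are ordered by column 4;
# the group-16384 row sorts after them.
printf '16384 1 2 900\n0 1 2 700\n0 1 2 300\n' | sort -n -k1,1 -k4,4
```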
Windows is particularly awkward at quoting things - especially scripts in single quotes like I use above for awk. I would suggest you extract the two awk scripts into separate files, something like this:
script1.awk
!/^#/{
H=$3; S=$4; L=$5 # Pick up HSL. For Hue, 32768=180deg, 65535=360deg. For S&L, 32768=50%, 65535=100%
NGROUPS=4 # Change this according to the number of groups of colours you want
bin=65535/NGROUPS # Calculate bin size
group=int(int(H/bin)*bin) # Split Hue into NGROUPS
printf "%d %d %d %d\n",group,H,S,L
}
and
script2.awk
{ H[++i]=$2; S[i]=$3; L[i]=$4 }
END {
printf "P3\n"
printf "%d %d\n",i,1
printf "65535\n"
for(j=1;j<=i;j++){
printf "%d %d %d\n",H[j],S[j],L[j]
}
}
Then the two lines in the main script will become something like:
convert image.png -unique-colors -colorspace hsl -depth 8 txt: | awk -F"[(), ]+" -f script1.awk > groupHSL.txt
and
awk -f script2.awk groupHSL-sorted.txt > HSL.pnm
I have different folders with datasets called e.g.
3-1-1
3-1-2
3-2-1
3-2-2
The first placeholder is fixed; the second and third are elements of a list:
k1values = "1 2"
k2values = "1 2"
I want to do simple operations in my gnuplot script, e.g. cd into the above directories and read a line of a text file: first it shall cd into the folder, read a file, cd back again, and so on.
My first idea was to combine the system command with sprintf:
do for[i=1:words(k1values)]{
do for[j=1:words(k2values)]{
system sprintf("cd 3-%d-%d", i, j)
system 'pwd'
system 'cd ..'
}
}
With that, the same path is printed each time, so no cd is happening at all.
or system 'cd sprintf("3-%d-%d", i, j)'
Unfortunately, this is not working.
Error message: sh: 1: Syntax error: "(" unexpected
I also tried concatenating the values into a string and using it as the path; this also doesn't work:
k1values = "1 2"
k2values = "1 2"
string1 = '3'
do for[i=1:words(k1values)]{
do for[j=1:words(k2values)]{
path = sprintf("%s-%d-%d", string1, i, j)
system sprintf("cd %s", path)
system 'pwd'
system 'cd ..'
}
}
I print the path for testing, but the working directory is not being changed at all.
Thanks in advance!
Edit: The idea in a given pseudo code is like this:
do for k1
do for k2
valueX = <readingCommand>
make dir "3-k1-k2/Pictures"
for int i = 0; i<valueX; i++
set output bla
plot "3-k1-k2/Data/i.txt" <options>
end for
end do for
end do for
Unless there is a reason which we don't know yet, why do you want to change back and forth into the subdirectories? (Note that each gnuplot system call runs in its own shell, so a cd in one call does not carry over to the next.)
Why not create your path/filename via a function, load the desired file, and plot the desired lines?
For example, if you have the following directory structure:
CurrentFolder
3-1-1
Data.dat
3-1-2
Data.dat
3-2-1
Data.dat
3-2-2
Data.dat
and the following files:
3-1-1/Data.dat
1 1.14
2 1.15
3 1.12
4 1.11
5 1.13
3-1-2/Data.dat
1 1.24
2 1.25
3 1.22
4 1.21
5 1.23
3-2-1/Data.dat
1 2.14
2 2.15
3 2.12
4 2.11
5 2.13
3-2-2/Data.dat
1 2.24
2 2.25
3 2.22
4 2.21
5 2.23
The following example loads all the files Data.dat from the corresponding subdirectories and plots the lines 2 to 4 (the lines have 0-based index, check help every).
Script:
### plot specific lines from files from different directories
reset session
k1values = "1 2"
k2values = "1 2"
string1 = '3'
myPath(i,j) = sprintf("%s-%s-%s",string1,word(k1values,i),word(k2values,j))
myFile(i,j) = sprintf("%s/%s",myPath(i,j),"Data.dat")
set key out
plot for [i=1:words(k1values)] for[j=1:words(k2values)] myFile(i,j) \
u 1:2 every ::1::3 w lp pt 7 ti myPath(i,j)
### end of script
Result:
This is my final solution:
k1values = '0.5 1'
k2values = '0.5 1'
omega = 3
do for[i in k1values]{
do for[j in k2values]{
savingPoint = system('head -n 1 "3-'.i.'-'.j.'/<fileName>.dat" | tail -1')
number = savingPoint/<value>
do for[m = savingPoint:0:-<value>]{
set title <...>
set output <...>
plot ''.omega.'-'.i.'-'.j.'/Data/'.m.'.txt' <...>
}
}
}
<...> is a placeholder and irrelevant.
So this is how I finally iterate over the folders.
Within the second for loop, a reading command is executed and its result assigned to a variable that is needed in the third for loop. i and j are strings here, but that does not matter.
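The reading command embedded in the script can be tried standalone; since `head -n 1` already yields a single line, the trailing `tail -1` is redundant but harmless:

```shell
# Read the first line of a file, as the script's savingPoint line does.
tmp=$(mktemp)
printf '42\n7\n' > "$tmp"
head -n 1 "$tmp" | tail -1
rm "$tmp"
```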
I have this small geo location dataset.
37.9636140,23.7261360
37.9440840,23.7001760
37.9637190,23.7258230
37.9901450,23.7298770
From a random location, for example 37.97570, 23.66721, I need to create a bash command with awk that returns the simple Euclidean distance to each point.
This is the command I use:
awk -v OFMT=%.17g -F',' -v long=37.97570 -v lat=23.66721 '{for (i=1;i<=NR;i++) distances[i]=sqrt(($1 - long)^2 + ($2 - lat)^2 ); a[i]=$1; b[i]=$2} END {for (i in distances) print distances[i], a[i], b[i]}' filename
When I run this command I get this weird result, which is not correct. Could someone explain what I am doing wrong?
➜ awk -v OFMT=%.17g -F',' -v long=37.97570 -v lat=23.66721 '{for (i=1;i<=NR;i++) distances[i]=sqrt(($1 - long)^2 + ($2 - lat)^2 ); a[i]=$1; b[i]=$2} END {for (i in distances) print distances[i], a[i], b[i]}' filename
44,746962127881936 37.9440840 23.7001760
44,746962127881936 37.9901450 23.7298770
44,746962127881936 37.9636140 23.7261360
44,746962127881936
44,746962127881936 37.9637190 23.7258230
Update: I appended the command that @jas provided and included od -c as @mark-fuso suggested.
The issue now is that I get different results from @jas.
Command output showing the new issue:
awk -v OFMT=%.17g -F, -v long=37.97570 -v lat=23.66721 '
{distance=sqrt(($1 - long)^2 + ($2 - lat)^2 ); print distance, $1, $2}
' file
1,1820150904705098 37.9636140 23.7261360
1,1820150904705098 37.9440840 23.7001760
1,1820150904705098 37.9637190 23.7258230
1,1820150904705098 37.9901450 23.7298770
od -c output showing the content of the input file:
od -c file
0000000 3 7 . 9 6 3 6 1 4 0 , 2 3 . 7 2
0000020 6 1 3 6 0 \n 3 7 . 9 4 4 0 8 4 0
0000040 , 2 3 . 7 0 0 1 7 6 0 \n 3 7 . 9
0000060 6 3 7 1 9 0 , 2 3 . 7 2 5 8 2 3
0000100 0 \n 3 7 . 9 9 0 1 4 5 0 , 2 3 .
0000120 7 2 9 8 7 7 0 \n
0000130
While @jas has provided a 'fix' for the problem, I thought I'd throw in a few comments about what OP's code is doing ...
Some basics ...
the awk program ({for (i=1;i<=NR;i++) ... ; b[i]=$2}) is applied against each row of the input file
as each row is read from the input file the awk variable NR keeps track of the row number (ie, NR=1 for the first row, NR=2 for the second row, etc)
on the last pass through the for loop the counter (i in this case) will have a value of NR+1 (ie, the i++ is applied on the last pass through the loop thus leaving i=NR+1)
unless there are conditional checks for each line of input the awk program will apply against every line from the input file (including blank lines - more on this below)
for (i in distances)... isn't guaranteed to process the array indices in numerical order
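The post-loop value of the counter is easy to demonstrate on a one-line input (a minimal sketch, not OP's full command):

```shell
# On the first row NR=1; after 'for (i=1;i<=NR;i++)' finishes, the
# final i++ has pushed the counter past NR, so i is 2 here.
echo x | awk '{ for (i = 1; i <= NR; i++); print "after loop, i =", i }'
```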
The awk/for loop is doing the following:
for the 1st input row (NR=1) we get for (i=1;i<=1;i++) ...
for the 2nd input row (NR=2) we get for (i=1;i<=2;i++) ...
for the 3rd input row (NR=3) we get for (i=1;i<=3;i++) ...
for the 4th input row (NR=4) we get for (i=1;i<=4;i++) ...
For each row processed by awk, the program will overwrite all previous entries in the distances[] array; the net result is that the last row (NR=4) places the same value in all 4 entries of the distances[] array.
The a[i]=$1; b[i]=$2 array assignments occur outside the scope of the for loop, so they are made once per input row (i.e., not overwritten); however, the assignments are made with i=NR+1. The net result is that the contents of the 1st row (NR=1) are stored in array entries a[2] and b[2], the contents of the 2nd row (NR=2) in a[3] and b[3], etc.
Modifying OP's code to print i, distances[i], a[i], b[i] and running against the 4-line input file I get:
1 0.064310270672728084 # no data for 2nd/3rd columns because a[1] and b[1] are never set
2 0.064310270672728084 37.9636140 23.7261360 # 2nd/3rd columns are from 1st row of input
3 0.064310270672728084 37.9440840 23.7001760 # 2nd/3rd columns are from 2nd row of input
4 0.064310270672728084 37.9637190 23.7258230 # 2nd/3rd columns are from 3rd row of input
From this we can see the first column of output is the same (ie, distance[1]=distance[2]=distance[3]=distance[4]), while the 2nd and 3rd columns are the same as the input columns except they are shifted 'down' by one row.
That leaves us with two outstanding issues ...
why does OP show 5 lines of output?
why does the first column consist of the garbage value 44,746962127881936?
I was able to reproduce this issue by adding a blank line on the end of my input file:
$ cat geo.dat
37.9636140,23.7261360
37.9440840,23.7001760
37.9637190,23.7258230
37.9901450,23.7298770
<<=== blank line !!
Which generates the following with OP's awk code:
44.746962127881936
44.746962127881936 37.9636140 23.7261360
44.746962127881936 37.9440840 23.7001760
44.746962127881936 37.9637190 23.7258230
44.746962127881936 37.9901450 23.7298770
NOTES:
this order is different from OP's sample output and is likely due to OP's awk version not processing for (i in distances)... in numerical order; OP can try something like for (i=1;i<=NR;i++)... or for (i=1; i in distances; i++)... (though the latter will not work correctly for a sparsely populated array)
OP's output (in the question and in a comment to @jas's answer) shows a comma (,) in place of the period (.) in the first column, so I'm guessing OP's env uses a locale that swaps the comma/period as thousands/decimal delimiters (while the input data is based on the 'opposite' locale)
Notice we finally get to see the data from the 4th line of input (shifted 'down' and displayed on line 5) but the first column has what appears to be a nonsensical value ... which can be tracked back to applying the following against a blank line:
sqrt(($1 - long)^2 + ($2 - lat)^2 )
sqrt(( - long)^2 + ( - lat)^2 ) # empty line => $1 = $2 = undefined/empty
sqrt(( - 37.97570)^2 + ( - 23.66721)^2 )
sqrt( 1442.153790 + 560.136829 )
sqrt( 2002.290619 )
44.746962... # contents of 1st column
To 'fix' this issue the OP can either a) remove the blank line from the input file or b) add some logic to the awk script to only perform calculations if the input line has (numeric) values in fields #1 & #2 (ie, $1 and $2 are not empty); it's up to the coder to decide on how much validation to apply (eg, are the fields numeric, are the fields within the bounds of legitimate long/lat values, etc).
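One possible guard (an illustrative sketch, not the only option) is to require two fields before doing the arithmetic, so blank lines are skipped:

```shell
# The blank middle line produces no output because NF==2 fails there;
# on a blank line $1 and $2 would otherwise be treated as 0.
printf '1,1\n\n4,5\n' |
  awk -F, -v long=0 -v lat=0 \
    'NF == 2 { print sqrt(($1 - long)^2 + ($2 - lat)^2) }'
```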
One last design-related comment ... as demonstrated in @jas's answer, there is no need for any of the arrays (which in turn reduces memory usage) when all desired output can be generated 'on the fly' while processing each line of the input file.
Awk takes care of the looping for you. The code will be run in turn for each line of the input file:
$ awk -v OFMT=%.17g -F, -v long=37.97570 -v lat=23.66721 '
{distance=sqrt(($1 - long)^2 + ($2 - lat)^2 ); print distance, $1, $2}
' file
0.060152679674309095 37.9636140 23.7261360
0.045676346307474212 37.9440840 23.7001760
0.059824979147508742 37.9637190 23.7258230
0.064310270672728084 37.9901450 23.7298770
EDIT:
OP is getting different results. I notice in OP's output that there are commas instead of decimal points when printing the distance. This points to a possible issue with the locale setting.
OP confirms that the locale was set to Greek, causing the difference in output.
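A hedged workaround, if changing the system locale is undesirable: force the C locale for the awk invocation so the decimal separator is always a period (exact locale handling varies between awk implementations):

```shell
# Under LC_ALL=C, printf-style output always uses '.' as the decimal point.
LC_ALL=C awk 'BEGIN { printf "%.4f\n", 44.7469 }'
```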
I have a number of files (N>1000) with QTL summary data; e.g., let's assume the first file is made of six lines (in reality they are all GWAS/imputed files with >10M SNPs):
cat QTL.1.txt
Chr Rs BP beta se pvalue
11 rs11224233 134945522 0.150216 0.736939 0.962375
11 rs4616056 134945709 0.129518 0.371824 0.910326
11 rs11823417 134945710 0.103462 0.41737 0.845826
11 rs80294507 134945765 0.150336 0.735363 0.961403
11 rs61907173 134946034 0.104531 0.158224 0.884548
11 rs147621717 134946277 0.105365 0.196168 0.86476
I would like to filter each of these datasets based on the chromosome and positions of a list of genes (my list has 100 genes, but for now let's assume it has 2), therefore creating N_QTL*N_Genes files. I would like to go through each gene/position for each QTL file. The chromosomes, positions, and names of the genes are stored in four arrays, and I would like to read these arrays iteratively and save the output for each QTL file for each gene.
What I have done so far doesn't work, and I know awk is not the best way to do this:
declare -a array1
declare -a array2
declare -a array3
declare -a array4
array1=(11 11) #chromosome
array2=(134945709 134945765) #start gene position
array3=(134946034 134946277) #end gene position
array4=(A B) # gene name
for qtl in 1; do # in reality it would be for qtl in 1 1000
for ((i=0; i<${#array1[@]}; i++)); do
cat QTL.$qtl.txt | awk '$1=='array1[$i]' && $3>='array2[$i]' &&
$3<='array3[$i]' {print$0}' > Gene.${array4[$i]}_QTL.$qtl.txt;
done;
done
Within awk, $1 is the chromosome and $3 the position - hence filtering based on these.
So my expected output for QTL.1.txt for Gene A would be
cat Gene.A_QTL.1.txt
Chr Rs BP beta se pvalue
11 rs4616056 134945709 0.129518 0.371824 0.910326
11 rs11823417 134945710 0.103462 0.41737 0.845826
11 rs80294507 134945765 0.150336 0.735363 0.961403
11 rs61907173 134946034 0.104531 0.158224 0.884548
And for QTL.1.txt for Gene B would be
cat Gene.B_QTL.1.txt
Chr Rs BP beta se pvalue
11 rs80294507 134945765 0.150336 0.735363 0.961403
11 rs61907173 134946034 0.104531 0.158224 0.884548
11 rs147621717 134946277 0.105365 0.196168 0.86476
I end up with empty files, probably because the way I reference the array values inside awk doesn't work.
Any help very much appreciated!
Thank you in advance.
Mixing bash and awk for parsing files is not always the best way forward.
Here is a solution with awk only.
Assume you have the information from your bash arrays in a file:
$ cat info
11 134945765 154945765 Gene1
12 134945522 174945522 Gene2
You could use the following awk script to perform a lookup with the data file:
awk 'NR==FNR{
for(i=2;i<=NF;i++)
a[$1,i]=$i
next
}
a[$1,2]<=$3 && a[$1,3]>=$3{
print $0 > a[$1,4]"_QTL"
}' info QTL.1.txt
This will create a file with the following content:
$ cat Gene1_QTL
11 rs80294507 134945765 0.150336 0.735363 0.961403
11 rs61907173 134946034 0.104531 0.158224 0.884548
11 rs147621717 134946277 0.105365 0.196168 0.86476
Maybe not exactly what you're looking for, but I hope this is helpful...
You might want to do the following if multiple genes are located on the same chromosome (using the gene name instead of the chromosome as the key):
awk 'NR==FNR{
chr[$4]=$1;
start[$4]=$2;
end[$4]=$3;
}
NR!=FNR{
for (var in chr){
name=var"_"FILENAME;
if(chr[var]==$1 && start[var] <=$3 && end[var]>=$3){
print $0 > name;
}
}
}' info QTL
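The NR==FNR two-file idiom used in both scripts can be exercised end to end on toy data (all file names and values here are made up for the demo):

```shell
# Build a tiny lookup file and data file in a temp dir, then let awk
# load the lookup on the first pass (NR==FNR) and filter on the second.
tmp=$(mktemp -d)
printf '11 100 200 GeneA\n' > "$tmp/info"
printf '11 rs1 150 x\n11 rs2 500 y\n' > "$tmp/data"
awk 'NR==FNR { chr[$4]=$1; start[$4]=$2; stop[$4]=$3; next }
     { for (g in chr)
         if (chr[g]==$1 && start[g]<=$3 && stop[g]>=$3)
           print g, $0 }' "$tmp/info" "$tmp/data"
rm -r "$tmp"
```

Only rs1 (position 150) falls inside GeneA's 100-200 window, so a single line is printed.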
I have a set of *.txt files, called test0001.txt, test0002.txt, .....
I want to convert the data of these files to *.png images (Out0001.png, ...).
set xrange [0:50]
set yrange [0:50]
set size square
set nokey
set pointsize 0.5
set terminal png size 1024,1024
do for [t=1:50] {
outfile = sprintf('Out%04.0f.png',t)
set output outfile
plot ('test%04.0f.txt',t) using 1:2 with points pt 7 lc rgb "black"
}
I get "';' expected in line 12" as an error. Just typing in the *.txt file names works, btw:
plot 'test0001.txt' using 1:2 with points pt 7 lc rgb "black"
This just creates the first image 50 times.
The first thing I'd be looking at is this line:
plot ('test%04.0f.txt',t) using 1:2 with points pt 7 lc rgb "black"
Since an earlier line used sprintf to do this string formatting, shouldn't you be looking at something like:
plot sprintf('test%04.0f.txt',t) using 1:2 with points pt 7 lc rgb "black"
Or, in toto, something like:
set xrange [0:50]
set yrange [0:50]
set size square
set nokey
set pointsize 0.5
set terminal png size 1024,1024
do for [t=1:50] {
inpfile = sprintf('test%04.0f.txt',t)
outfile = sprintf('Out%04.0f.png',t)
set output outfile
plot inpfile using 1:2 with points pt 7 lc rgb "black"
}
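Gnuplot's sprintf follows C-style formatting, so the %04.0f conversion can be sanity-checked with shell printf: it rounds to an integer and zero-pads to width 4.

```shell
# 7 -> "0007", giving file names like test0007.txt.
printf 'test%04.0f.txt\n' 7
```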
I'm trying to learn how to deal with NURBS surfaces for a project. Basically I want to build a geometry in some 3D program with NURBS, then export the geometry and run some simulations with it. I have figured out the NURBS curve, and I do think I mostly understand how surfaces work, but what I don't get is how the control points are connected. Apparently you don't need any topology matrix as with polygons? When I export NURBS surfaces from Maya in the .ma file format, which is a plain text file, I can see the knot vectors and then just a list of points. No topology information. How does this work? How can you reconstruct the NURBS surface without knowing how the points are connected to each other? The exported file is written below:
//Maya ASCII 2013 scene
//Name: test4.ma
//Last modified: Sat, Jan 26, 2013 07:21:36 PM
//Codeset: UTF-8
requires maya "2013";
requires "stereoCamera" "10.0";
currentUnit -l centimeter -a degree -t film;
fileInfo "application" "maya";
fileInfo "product" "Maya 2013";
fileInfo "version" "2013 x64";
fileInfo "cutIdentifier" "201207040330-835994";
fileInfo "osv" "Mac OS X 10.8.2";
fileInfo "license" "student";
createNode transform -n "loftedSurface1";
setAttr ".t" -type "double3" -0.68884794895562784 0 -3.8172687581953233 ;
createNode nurbsSurface -n "loftedSurfaceShape1" -p "loftedSurface1";
setAttr -k off ".v";
setAttr ".vir" yes;
setAttr ".vif" yes;
setAttr ".covm[0]" 0 1 1;
setAttr ".cdvm[0]" 0 1 1;
setAttr ".dvu" 0;
setAttr ".dvv" 0;
setAttr ".cpr" 4;
setAttr ".cps" 4;
setAttr ".cc" -type "nurbsSurface"
3 3 0 0 no
8 0 0 0 1 2 3 3 3
11 0 0 0 1 2 3 4 5 6 6 6
54
0.032814107781307778 -0.01084889661073064 -2.5450696958149557
0.032814107781308312 -0.010848896610730773 -1.6967131305433036
0.032824475105651972 -0.010848896610730714 -0.0016892641735144487
0.032777822146102309 -0.01084889661073018 2.5509821204222565
0.032948882997777158 -0.010848896610730326 5.3256822304677218
0.032311292550627417 -0.010848896610730283 7.5033561343333179
0.034690593487551526 -0.010848896610730296 11.39484483093603
0.014785648001686571 -0.010848896610730293 11.972583607988943
-0.00012526283089935193 -0.010848896610730293 12.513351622510489
0.87607723187763198 -0.023973071493875439 -2.5450696958149557
0.87607723187766595 -0.023973071493876091 -1.6967131305433036
0.87636198619878247 -0.023973071493875821 0.00026157734839016289
0.87508059175355446 -0.023973071493873142 2.5441541750955903
0.87977903805225144 -0.023973071493873861 5.3510431702524812
0.86226664730269065 -0.02397307149387367 7.4087403205209448
0.9276177640022375 -0.023973071493873725 11.747947146400762
0.39164345444212556 -0.023973071493873704 12.72679599298271
-0.003344290659457324 -0.023973071493873708 13.356608602511475
2.7585407036097025 0.080696275184513055 -2.5450696958149557
2.7979735813230628 0.036005680442686323 -1.6988092981025378
2.7828331201271896 0.05438167150027777 0.0049374879309111996
2.6143679292284574 0.23983328019207673 2.5309327393956176
2.67593270347135 0.19013709747074492 5.3992530024698517
2.5981387973985108 0.20347021966427298 7.2291224273514345
2.8477496474469728 0.19983391361149261 12.418208886861429
1.1034136098865515 0.20064198162322153 14.474560637904968
-0.010126299867110311 0.20064198162322155 15.133224682698101
4.5214126649737496 0.45953483463333544 -2.5450696958149557
4.6561826938778452 0.23941045408996731 -1.7369291398229287
4.6267725925384751 0.29043329565744253 0.025561242784985394
3.9504978751410711 1.3815767918640129 2.5159293599869446
4.1596851721552888 1.0891788615080038 5.438642765250469
3.9992107014958198 1.1676270867254697 7.0865667556376426
4.4319212871194775 1.1462321162116154 12.949041810935984
1.6384310220676352 1.1509865541035829 15.927795222282771
-0.015643773215464073 1.1509865541035829 16.578582772395933
5.2193823159440154 3.0233786192453191 -2.5450696958149557
5.2193823159440162 3.0233786192453196 -1.6967131305433036
5.2218229691816047 3.0233786192453191 0.0091618497226043649
5.2108400296124504 3.0233786192453196 2.5130032217858407
5.251110808032692 3.0233786192453191 5.4667467111172652
5.1010106339208772 3.0233786192453191 6.9770771103715621
5.6611405519478906 3.0233786192453205 13.358896446133507
2.0430537629341199 3.0233786192453183 17.059047057656215
-0.019924192630756767 3.0233786192453191 17.6998820408444
5.1365144716134976 5.4897102753589557 -2.5450696958149557
5.1365144716134994 5.4897102753589566 -1.6967131305433036
5.1389093836131625 5.4897102753589566 0.0089946049919694682
5.1281322796146718 5.4897102753589566 2.5135885783430627
5.1676483276091361 5.4897102753589548 5.4645725296190131
5.0203612396297714 5.4897102753589566 6.9851884798073476
5.5699935435527692 5.4897102753589566 13.328625149888618
2.0133428487217855 5.4897102753589557 16.975388787391935
-0.01960785732642523 5.4897102753589557 17.617014800296868
;
select -ne :time1;
setAttr ".o" 1;
setAttr ".unw" 1;
select -ne :renderPartition;
setAttr -s 2 ".st";
select -ne :initialShadingGroup;
setAttr ".ro" yes;
select -ne :initialParticleSE;
setAttr ".ro" yes;
select -ne :defaultShaderList1;
setAttr -s 2 ".s";
select -ne :postProcessList1;
setAttr -s 2 ".p";
select -ne :defaultRenderingList1;
select -ne :renderGlobalsList1;
select -ne :hardwareRenderGlobals;
setAttr ".ctrs" 256;
setAttr ".btrs" 512;
select -ne :defaultHardwareRenderGlobals;
setAttr ".fn" -type "string" "im";
setAttr ".res" -type "string" "ntsc_4d 646 485 1.333";
select -ne :ikSystem;
setAttr -s 4 ".sol";
connectAttr "loftedSurfaceShape1.iog" ":initialShadingGroup.dsm" -na;
// End of test4.ma
A NURBS surface is always topologically square, with degree+spans points in the u direction and (degree-1)+spans+1* in the v direction. (A single NURBS surface is like one face of a polygon, only more complicated.)
The first 2 attributes in ".cc" are the degree in each direction, and the next two lines define the knots. Duplicate values indicate knot multiplicity, i.e. the knot is repeated that many times. So:
8 0 0 0 1 2 3 3 3
This means there are 8 knots (in this case in the U direction) with values 0 1 2 3 (each repeated per its multiplicity), giving 6 points, so it's a third-degree curve in the U direction. The example has 9 points in the V direction, thus 6*9 = 54 points in total.
This is not enough, however, for NURBS to be even remotely useful. You must implement trim curves, which are curves that lie on the UV parametrization of the surface; they can clip an individual NURBS surface to a different shape.
In practice, however, Maya users rely on manual quilting. Quilts** are the higher-order NURBS equivalent of a mesh, a concept most NURBS modelers use. To handle these, even trim curves are often not enough, as trim curves cannot be reliably transported between applications without sewing. Thus many applications additionally record the topological connections between the surfaces of a quilt collection. So be prepared to write your own intersection algorithms, etc., for any meaningful NURBS compatibility.
For more on the mathematical underpinnings, see Wikipedia, Wolfram, etc.
* If I remember correctly something like that.
** Quilts have different names in different applications due to simultaneous discovery in several different language areas.
NURBS surfaces' CVs are always laid out in a grid. The number of CVs in a nurbs surface can be computed using the degree of the surface and the number of knots in each direction. Then the CVs are just presented in some specific order, typically row-major.
Let's look at your example. I'm mostly just guessing the format, so you'll want to check my assumptions.
3 3 0 0 no
It looks like you have a bicubic surface. It's not periodic in either direction (that is, you have a sheet rather than a cylinder or torus). Your CVs are non-rational, meaning they're [x,y,z] instead of [xw,yw,zw,w].
In other words, the format of that first line appears to be:
[degree in s] [degree in t] [periodic in s] [periodic in t] [rational]
Next up, one knot vector has 8 knot values, and the other has 11. For a degree-3 non-periodic NURBS, the number of CVs is num_knots - 2. So you have 6 x 9 CVs in this surface.
The first 6 CVs are in the first row. The next 6 are in the next row, etc.
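The counts can be checked against the numbers in the file; a quick sketch of the arithmetic:

```shell
# Degree-3, non-periodic: CVs per direction = knots - 2.
awk 'BEGIN {
  u = 8 - 2     # 8 U knots  -> 6 CVs
  v = 11 - 2    # 11 V knots -> 9 CVs
  print u * v   # total CV count, matching the 54 in the .ma file
}'
```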
If you're looking for more information on NURBS, I'd recommend this text for theory. For maya specific stuff, they have some decent documentation in the maya API.