Issue duplicating Ellipse/Arc entities inside a DXF file

I am trying to write code to generate ellipses/arcs in the DXF file format. The number of ellipses depends on some parameters. I have no issue generating one ellipse or two, but as the number of ellipses increases, the file eventually gets corrupted.
I found out that the name of each ellipse must follow a naming rule, but I am not sure what system AutoCAD uses for the naming.
I extracted these names from the ellipses which were generated by AutoCAD:
D1, D3, 87, 92, 98, 9E, A4, AA, B0, B6, BC, C2, C3, C9, CF, D5, D7, D9
My question is: what system/rule does AutoCAD use for the naming?
Notice that AutoCAD uses D1 for the first ellipse, and then D3 for the second:
0
ELLIPSE
5
D1
330
70
100
AcDbEntity
8
0
100
AcDbEllipse
10
8.193371416945673
20
6.584439091463058
30
0.0
11
0.0
21
0.9445114593901811
31
0.0
210
0.0
220
0.0
230
1.0
40
0.9770115006281081
41
3.141592653589792
42
4.712388980384688
0
ELLIPSE
5
D3
330
70
100
AcDbEntity
8
0
100
AcDbEllipse
10
8.193371416945673
20
6.584439091463058
30
0.0
11
0.0
21
0.9445114593901811
31
0.0
210
0.0
220
0.0
230
1.0
40
0.9770115006281081
41
3.141592653589792
42
4.712388980384688

Group code 5 (tag <5, D3>) defines an entity handle (a text string of up to 16 hexadecimal digits) that must be unique for each entity in the DXF file. So you need to keep track of all the handles used in your DXF file (the DIMSTYLE table entry uses group code 105 for its handle and is the only exception).
AutoCAD is happy when you write the next available handle to the HEADER variable $HANDSEED, but does not require a valid entry to open the DXF file.
See also: DXF Reference provided by Autodesk
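For example, here is a minimal Python sketch of such a handle allocator (the class name and seed value are illustrative, not part of any DXF library; any unused hex value works, gaps included):
class HandleAllocator:
    # Hands out unique uppercase-hex entity handles for one DXF file.
    def __init__(self, seed=0xD1):
        self.next_value = seed  # first unused handle value

    def next_handle(self):
        handle = format(self.next_value, 'X')  # e.g. 0xD1 -> 'D1'
        self.next_value += 1
        return handle

handles = HandleAllocator()
print(handles.next_handle(), handles.next_handle())  # D1 D2
# When done, write format(handles.next_value, 'X') to $HANDSEED in the HEADER.
Consecutive handles are fine; the gaps in AutoCAD's sequence (D1, D3, ...) presumably just mean other objects consumed handles in between.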

Repeat certain pandas series values, so that it has an entry for all index values between 1 and 100

I have created a list of pandas series, with each series indexed by numbers between 1 and 100, e.g.:
Index Value
1 62.99
4 64.39
37 75.225
65 88.12
74 89.89
79 93.30
88 94.30
92 95.83
100 100.00
What I want to do, either while it is a Series, or as an array after calling .to_numpy() on it, is to fill it out so that my series has 100 values (1 to 100), with any new entries taking the previous existing value, i.e.:
Index Value
1 62.99
2 62.99
3 62.99
4 64.39
5 64.39
6 64.39
...
...
36 64.39
37 75.225
38 75.225
and so on.
I can do this programmatically the long-winded way by iterating through each series and checking for a change in value; my question is, is there a version of Series.repeat() which could do this in one hit, or a numpy function which can 'pad out' my array in this manner with my 100 values?
Thanks in advance for reading, and for any suggestions. This isn't homework; it's a genuine question so please don't attack me if my style of asking isn't as you expect.
What you need to do is forward-fill the values in the series.
This code
import pandas as pd

series = pd.Series([33.2, 36, 39, 55], index=[3, 6, 12, 14], name='series')
indices = range(100)
df = pd.DataFrame(indices)
series = df.join(series).ffill()['series']
produces
0 NaN
1 NaN
2 NaN
3 33.2
4 33.2
...
95 55.0
96 55.0
97 55.0
98 55.0
99 55.0
The first values are NaN because there are no earlier values in the series to fill them from.
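A possibly simpler equivalent, assuming the same series as above, is Series.reindex with forward-filling:
import pandas as pd

series = pd.Series([33.2, 36, 39, 55], index=[3, 6, 12, 14], name='series')
# Reindex to the full 0..99 range; method='ffill' carries each value forward.
filled = series.reindex(range(100), method='ffill')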
So here's the solution I went with: an ffill() with fillna(0), joined to range(1, 101). I had to iterate through a larger dataset, which needed grouping by ID first and taking the maximum 'Pct' per 'Bucket':
j = df[['ID', 'Bucket', 'Pct']].groupby(['ID', 'Bucket']).max()
for i in df['ID'].unique():
    index = pd.DataFrame(range(1, 101))
    index.columns = ['Bucket']
    k = pd.merge(index, j.loc[i], how='left', on='Bucket').ffill().fillna(0)
In:
Bucket Pct
3 0.03
3 0.1
3 0.26
3 0.42
3 0.45
3 0.59
3 0.69
3 0.83
3 0.86
3 0.91
3 0.94
3 0.98
4 1.1
... ...
91 98.89
93 99.08
94 99.17
94 99.26
94 99.43
94 99.48
94 99.63
100 100.0
Out:
Bucket Pct
1 0.00
2 0.00
3 0.98
4 1.83
5 22.83
... ...
91 98.89
92 98.89
93 99.08
94 99.63
95 99.63
96 99.63
97 99.63
98 99.63
99 99.63
100 100.00
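For reference, the same per-ID fill can also be written with reindex instead of the merge (a sketch reusing the j, i, and loop from above):
k = j.loc[i]['Pct'].reindex(range(1, 101), method='ffill').fillna(0)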
Many, many thanks once again to you both!

Simple way of converting list from character to numeric in R?

I have looked into other threads on this problem and could not find an easy solution. I have imported data from Excel tables and joined them in lists, which generally look like this:
> Hemo
[[1]]
V1 V2 V3 V4 V5 V6 V7
1 0d 3d 6d 9d 12d 15d 18d
2 10 40 20 60 50 30 40
3 20 30 30 30 30 30 30
4 20 20 30 20 40 20 50
[[2]]
V1 V2 V3 V4 V5 V6 V7
1 0d 3d 6d 9d 12d 15d 18d
2 0 10 10 0 0 0 0
3 0 10 20 20 20 0 0
4 0 0 10 20 20 0 0
However I'd like them to look like this (which is an array):
, , 1
0d 3d 6d 9d 12d 15d 18d
V2 10 40 20 60 50 30 40
V3 20 30 30 30 30 30 30
V4 20 20 30 20 40 20 50
, , 2
0d 3d 6d 9d 12d 15d 18d
V2 0 10 10 0 0 0 0
V3 0 10 20 20 20 0 0
V4 0 0 10 20 20 0 0
In the first case all elements are characters and I am not able to coerce them to numbers. Ultimately I'd like to convert the first list into the second array, where the first imported line figures as the column names. There must be some package enabling this? Please let us find a simple workaround as I am a newbie. Thanks
It appears as though you imported the data from Excel, but the column names were interpreted as data. You didn't specify which function you used to do the importing, but with most of them you can specify that the first row of data contains the column names.
library(readxl)
data <- read_excel(filename, col_names = TRUE)
When you import your data properly, it won't confuse the column names with the actual data, and it should automatically read the values as numerics. This way you won't have to convert them yourself.

Comparing multiple column files using python3

input_file1:
a 1 33
a 34 67
a 68 78
b 1 99
b 100 140
c 1 70
c 71 100
c 101 190
input_file2:
a 5 23
a 30 72
a 76 78
b 5 30
c 23 88
c 92 98
I want to compare these two files such that, for every key (e.g. 'a') in file2, I can check whether its two integers (the boundaries) fall within one of the ranges for that key in file1, or between two ranges.
Instead of storing values like 'a 1 33', you can use a single delimited structure (like 'a:1:33') for your data while writing the file, so that it becomes easier to read the data back.
Then you can read each line, split it on the ':' separator, and compare it with the other file easily.
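For instance, here is a minimal sketch assuming both files stay whitespace-separated as shown (the file names and the containment check are illustrative choices, not from the question):
from collections import defaultdict

def read_ranges(path):
    # Parse lines like "a 1 33" into {'a': [(1, 33), ...]}.
    ranges = defaultdict(list)
    with open(path) as f:
        for line in f:
            key, low, high = line.split()
            ranges[key].append((int(low), int(high)))
    return ranges

file1 = read_ranges('input_file1.txt')
file2 = read_ranges('input_file2.txt')

for key, pairs in file2.items():
    for low, high in pairs:
        # A pair fits if a single range in file1 contains both boundaries;
        # otherwise it spans two ranges or falls outside them.
        inside = any(a <= low and high <= b for a, b in file1[key])
        print(key, low, high, 'within one range' if inside else 'spans ranges')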

Trouble reading a data set into R

I am new to R and I am trying to read in a data set. The data set is here:
http://petitlien.fr/myfiles
(The above link will expand to a GMX File storage folder link and click on Guest access to retrieve the file.)
The file named mydata.log has 32 entries with no header and it consists of 2 columns which are delimited by spaces.
I am trying the powerful scan command:
test.frame <- scan(file="mydata.log", sep="", nlines=32, blank.lines.skip=TRUE)
The above just read the first 3 rows:
head(test.frame)
[1] 0.0000 0.0000 144.3210 0.3400 159.4070 0.8925
I have also tried read.table:
test.frame <- read.table(file="mydata.log", sep="", nrows=32, blank.lines.skip=TRUE)
This one reads the first 6 lines only as shown below:
names(test.frame)
[1] "V1" "V2"
> head(test.frame)
V1 V2
1 0.000 0.0000
2 144.321 0.3400
3 159.407 0.8925
4 198.413 0.9450
5 222.557 0.9975
6 235.464 1.0500
Does someone know how to read this data set properly?
A related question: Can I control the number of significant digits or perhaps decimal places in the data being read in?
Thanks a lot...
This line of your code works perfectly:
test.frame <- read.table(file="mydata.log", sep="", nrows=32, blank.lines.skip=TRUE)
The reason you only see 6 lines in your output is that you are using head, which shows just the first six rows by default. To view all lines, just enter the name of your object:
> test.frame
V1 V2
1 0.000 0.0000
2 144.321 0.3400
3 159.407 0.8925
4 198.413 0.9450
5 222.557 0.9975
6 235.464 1.0500
7 296.918 1.1025
8 346.773 1.1550
9 442.955 1.2075
10 694.879 1.2600
11 892.436 1.3125
12 1492.970 1.3650
13 2916.960 1.4175
14 3596.060 1.4700
15 5278.950 1.5225
16 7480.730 1.5750
17 12259.800 1.6275
18 14032.600 1.6800
19 19565.600 1.7325
20 31427.700 1.7850
21 58221.400 1.8375
22 92283.900 1.9900
23 165601.000 1.9425
24 165703.000 1.9950
25 213925.000 2.8750
26 260381.000 2.1000
27 312701.000 2.1525
28 370853.000 2.2050
29 479303.000 2.2575
30 487265.000 2.3100
31 545225.000 2.3625
32 703186.000 2.4150
Here is an easy way to see how many rows you have (useful when you have many observations):
nrow(test.frame)
[1] 32
As for the number of digits, see the round command. To look at the documentation for a command, enter a ? and then the command, in this case a function: ?round
#note that you do not have to put "digits=2", you can just put "2", but this way is clearer
> rounded_test.frame <- round(test.frame, digits=2)
> rounded_test.frame
V1 V2
1 0.00 0.00
2 144.32 0.34
3 159.41 0.89
4 198.41 0.94
5 222.56 1.00
6 235.46 1.05
7 296.92 1.10
8 346.77 1.16
9 442.95 1.21
10 694.88 1.26
11 892.44 1.31
12 1492.97 1.36
13 2916.96 1.42
14 3596.06 1.47
15 5278.95 1.52
16 7480.73 1.57
17 12259.80 1.63
18 14032.60 1.68
19 19565.60 1.73
20 31427.70 1.78
21 58221.40 1.84
22 92283.90 1.99
23 165601.00 1.94
24 165703.00 2.00
25 213925.00 2.88
26 260381.00 2.10
27 312701.00 2.15
28 370853.00 2.21
29 479303.00 2.26
30 487265.00 2.31
31 545225.00 2.36
32 703186.00 2.42
Note in the above I created a new object instead of replacing the current one. If you want to replace the current one and lose the data forever (until you reload the dataset of course!), then you can use this line instead:
test.frame <- round(test.frame, digits=2)
If you don't really want to compress your numbers, you might just be interested in viewing the rounded numbers. You can do this with the following command:
print(test.frame,digits=2)
Instead of nrow() as suggested, I would recommend str() ("structure"), which gives you more useful information about your data set (class of variables, etc.). It's also a bit less cryptic... :)

How do I gaussian blur an image without using any in-built gaussian functions?

I want to blur my image using the native Gaussian blur formula. I read the Wikipedia article, but I am not sure how to implement this.
How do I use the formula to decide weights?
I do not want to use any built-in functions like those MATLAB has.
Writing a naive gaussian blur is actually pretty easy. It is done in exactly the same way as any other convolution filter. The only difference between a box and a gaussian filter is the matrix you use.
Imagine you have an image defined as follows:
0 1 2 3 4 5 6 7 8 9
10 11 12 13 14 15 16 17 18 19
20 21 22 23 24 25 26 27 28 29
30 31 32 33 34 35 36 37 38 39
40 41 42 43 44 45 46 47 48 49
50 51 52 53 54 55 56 57 58 59
60 61 62 63 64 65 66 67 68 69
70 71 72 73 74 75 76 77 78 79
80 81 82 83 84 85 86 87 88 89
90 91 92 93 94 95 96 97 98 99
A 3x3 box filter matrix is defined as follows:
0.111 0.111 0.111
0.111 0.111 0.111
0.111 0.111 0.111
To apply the filter you would do the following:
For pixel 11 you would need to load pixels 0, 1, 2, 10, 11, 12, 20, 21, 22.
You would then multiply pixel 0 by the upper-left entry of the 3x3 filter, pixel 1 by the top middle, pixel 2 by the top right, pixel 10 by the middle left, and so on.
Then add them all together and write the result to pixel 11. As you can see, pixel 11 is now the average of itself and the surrounding pixels.
Edge cases do get a bit more complex. What values do you use for samples that fall off the edge of the image? One way is to wrap around to the other side; this looks good for an image that is later tiled. Another way is to clamp, reusing the nearest edge pixels for the missing positions.
So for upper left you might place the samples as follows:
0 0 1
0 0 1
10 10 11
I hope you can see how this can easily be extended to large filter kernels (ie 5x5 or 9x9 etc).
The difference between a gaussian filter and a box filter is the numbers that go in the matrix. A gaussian filter uses a gaussian distribution across a row and column.
e.g. for a filter row defined arbitrarily as (i.e. this isn't a gaussian, but probably not far off)
0.1 0.8 0.1
the first column of the kernel is that row multiplied by its first item (0.1):
0.01
0.08
0.01
The second column is the same row multiplied by the 0.8 (and so on), which gives the full kernel:
0.01 0.08 0.01
0.08 0.64 0.08
0.01 0.08 0.01
The result of adding all of the above together should equal 1. The difference between the above filter and the original box filter is that the end pixel written would have a much heavier weighting towards the central pixel (i.e. the one that is in that position already). The blur occurs because the surrounding pixels do blur into that pixel, though not as much. Using this sort of filter you get a blur, but one that doesn't destroy as much of the high-frequency (i.e. rapid changing of colour from pixel to pixel) information.
These sorts of filters can do lots of interesting things. You can do edge detection with this sort of filter by subtracting the surrounding pixels from the current pixel, which leaves only the really big changes in colour (high frequencies) behind.
Edit: A 5x5 filter kernel is defined exactly as above.
e.g. if your row is 0.1 0.2 0.4 0.2 0.1, then multiplying each value in it by the first item forms the first column, multiplying each by the second item forms the second column, and so on, and you'll end up with the filter:
0.01 0.02 0.04 0.02 0.01
0.02 0.04 0.08 0.04 0.02
0.04 0.08 0.16 0.08 0.04
0.02 0.04 0.08 0.04 0.02
0.01 0.02 0.04 0.02 0.01
Taking some arbitrary positions, you can see that position 0, 0 is simply 0.1 * 0.1, position 0, 2 is 0.1 * 0.4, position 2, 2 is 0.4 * 0.4, and position 1, 2 is 0.2 * 0.4.
I hope that gives you a good enough explanation.
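As an aside (my own illustration, not part of the answer above), this outer-product construction is a one-liner in NumPy:
import numpy as np

row = np.array([0.1, 0.2, 0.4, 0.2, 0.1])
kernel = np.outer(row, row)   # kernel[i, j] == row[i] * row[j]
print(kernel[0, 0], kernel[0, 2], kernel[2, 2])   # 0.01 0.04 0.16
print(kernel.sum())   # 1.0, because the row itself sums to 1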
Here's the pseudo-code for the code I used in C# to calculate the kernel. I do not dare say that I treat the end-conditions correctly, though:
double[] kernel = new double[radius * 2 + 1];
double twoRadiusSquaredRecip = 1.0 / (2.0 * radius * radius);  // radius doubles as sigma here
double sqrtTwoPiTimesRadiusRecip = 1.0 / (Math.Sqrt(2.0 * Math.PI) * radius);
double radiusModifier = 1.0;
int r = -radius;
for (int i = 0; i < kernel.Length; i++)
{
    double x = r * radiusModifier;
    x *= x;
    kernel[i] = sqrtTwoPiTimesRadiusRecip * Math.Exp(-x * twoRadiusSquaredRecip);
    r++;
}
// Normalize so the weights sum to exactly 1.
double div = kernel.Sum();  // requires using System.Linq;
for (int i = 0; i < kernel.Length; i++)
{
    kernel[i] /= div;
}
Hope this helps.
To use the filter kernel discussed in the Wikipedia article you need to implement (discrete) convolution. The idea is that you have a small matrix of values (the kernel), you move this kernel from pixel to pixel in the image (i.e. so that the center of the matrix is on the pixel), multiply the matrix elements with the overlapped image elements, sum all the values in the result and replace the old pixel value with this sum.
Gaussian blur can be separated into two 1D convolutions (one vertical and one horizontal) instead of a 2D convolution, which also speeds things up a bit.
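As a small illustration of the separable approach (my own sketch; the function names, the sampled kernel, and the clamped 'edge' padding are assumptions, not taken from this answer):
import numpy as np

def gaussian_kernel_1d(radius, sigma):
    # Sampled Gaussian, normalized so the weights sum to 1.
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-(x * x) / (2.0 * sigma * sigma))
    return k / k.sum()

def gaussian_blur(image, radius=2, sigma=1.0):
    k = gaussian_kernel_1d(radius, sigma)
    # Clamp edges by repeating border pixels, then convolve rows, then columns.
    padded = np.pad(image.astype(float), radius, mode='edge')
    rows = np.apply_along_axis(lambda m: np.convolve(m, k, mode='valid'), 1, padded)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode='valid'), 0, rows)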
I am not clear whether you want to restrict this to certain technologies, but if not, SVG (Scalable Vector Graphics) has an implementation of Gaussian blur. I believe it applies to all primitives, including pixels. SVG has the advantage of being an open standard that is widely implemented.
Well, the Gaussian kernel is a separable kernel.
Hence all you need is a function which supports separable 2D convolution, like ImageConvolutionSeparableKernel().
Once you have it, all that's needed is a wrapper to generate the 1D Gaussian kernel and send it to the function, as done in ImageConvolutionGaussianKernel().
The code is a straightforward C implementation of 2D image convolution accelerated by SIMD (SSE) and multi-threading (OpenMP).
The whole project is given by Image Convolution - GitHub.
