Shapefile quadtree in shapelib - c

In shapelib, I've noticed that quite a lot of code is devoted to handling the Shapefile quadtree, for instance the tool shptreedump (included in the shapelib source code).
warmerda#gdal[207]% shptreedump -maxdepth 6 eg_data/polygon.shp
( SHPTreeNode
  Min = (471127.19,4751545.00)
  Max = (489292.31,4765610.50)
  Shapes(0):
  ( SHPTreeNode
    Min = (471127.19,4751545.00)
    Max = (481118.01,4765610.50)
    Shapes(0):
    ( SHPTreeNode
      Min = (471127.19,4751545.00)
      Max = (481118.01,4759281.03)
      Shapes(0):
      ( SHPTreeNode
        Min = (471127.19,4751545.00)
        Max = (476622.14,4759281.03)
        Shapes(0):
        ( SHPTreeNode
          Min = (471127.19,4751545.00)
          Max = (476622.14,4755799.81)
          Shapes(0):
          ( SHPTreeNode
            Min = (471127.19,4751545.00)
            Max = (474149.41,4755799.81)
            Shapes(6): 395 397 402 404 405 422
          )
          ( SHPTreeNode
            Min = (473599.92,4751545.00)
            Max = (476622.14,4755799.81)
            Shapes(10): 392 394 403 413 414 417 426 433 434 447
          )
        ) ...
I thought I was quite familiar with the shapefile format after reading the ESRI Shapefile Technical Description, but I can't see any internal tree structure in it. So my question is: what is the shapefile quadtree for? If possible, please also explain how the shapefile quadtree is implemented.
Thanks.

If you look at the end of your quoted text, right where you stopped, there are lots of closing parentheses... good old Lisp-style encoding:
(R (st1 (st21 () () () ()) () () ()) (st2) (st3) (st4))
R stands for the root of the tree; then you have four subtrees in parentheses plus the actual data. I denoted the four subtrees by st1...st4; st21 stands for the first subtree on the second level. The subtrees can be labeled, or denoted by () if empty. This encoding is easy to parse and print.
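To make "easy to parse and print" concrete, here is a minimal Python sketch that parses the toy encoding above into nested lists (note it does not handle real shptreedump output, whose coordinate pairs also contain parentheses):

def parse_tree(text):
    """Parse the toy '(R (st1 ...) (st2) ...)' encoding into nested lists.
    Each node is returned as [label, [children, ...]]."""
    pos = 0

    def parse_node():
        nonlocal pos
        assert text[pos] == "("
        pos += 1
        label_chars = []
        children = []
        while pos < len(text):
            ch = text[pos]
            if ch == "(":            # start of a child subtree
                children.append(parse_node())
            elif ch == ")":          # end of this node
                pos += 1
                return ["".join(label_chars).strip(), children]
            else:                    # text belonging to this node itself
                label_chars.append(ch)
                pos += 1
        raise ValueError("unbalanced parentheses")

    pos = text.index("(")            # skip anything before the first '('
    return parse_node()

root = parse_tree("(R (st1 (st21 () () () ()) () () ()) (st2) (st3) (st4))")
print(root[0], len(root[1]))         # -> R 4  (root label and its four subtrees)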

Related

Getting No such file or directory [[{{node ReadFile}}]] [[IteratorGetNext]] [Op:__inference_train_function_9137] error

This may have a simple answer, but I'm currently building a neural network with Keras and I ran into this problem with the following code:
EPOCHS = 50
callbacks = [
    tf.keras.callbacks.ReduceLROnPlateau(
        monitor='val_loss', factor=0.1, patience=10, verbose=1, mode='min', min_delta=0.0001),
    tf.keras.callbacks.ModelCheckpoint(
        'weights.tf', monitor='val_loss', verbose=1, save_best_only=True, save_weights_only=True),
    tf.keras.callbacks.EarlyStopping(
        monitor='val_loss', min_delta=0, patience=15, verbose=1, restore_best_weights=True)
]
history = model.fit(
    train_ds,
    validation_data=val_ds,
    verbose=1,
    callbacks=callbacks,
    epochs=EPOCHS,
)
model.load_weights('weights.tf')
model.evaluate(val_ds)
Output:
Epoch 1/50
NotFoundError                             Traceback (most recent call last)
<ipython-input-15-265d39d703c7> in <module>
     10 ]
     11
---> 12 history = model.fit(
     13     train_ds,
     14     validation_data=val_ds,

1 frames
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
     52   try:
     53     ctx.ensure_initialized()
---> 54     tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
     55                                         inputs, attrs, num_outputs)
     56   except core._NotOkStatusException as e:

NotFoundError: Graph execution error:
train/60377.jpg; No such file or directory
  [[{{node ReadFile}}]]
  [[IteratorGetNext]] [Op:__inference_train_function_9137]
Here's my data:
FairFace Dataset from Kaggle
Here's how I preprocessed the images from the FairFace dataset (using code I borrowed):
IMG_SIZE = 224
AUTOTUNE = tf.data.AUTOTUNE
BATCH_SIZE = 224
NUM_CLASSES = len(labels_map)

# Dataset creation
y_train = tf.keras.utils.to_categorical(train.race, num_classes=NUM_CLASSES, dtype='float32')
y_val = tf.keras.utils.to_categorical(val.race, num_classes=NUM_CLASSES, dtype='float32')
train_ds = tf.data.Dataset.from_tensor_slices((train.file, y_train)).shuffle(len(y_train))
val_ds = tf.data.Dataset.from_tensor_slices((val.file, y_val))
assert len(train_ds) == len(train.file) == len(train.race)
assert len(val_ds) == len(val.file) == len(val.race)

# Read files
def map_fn(path, label):
    image = tf.io.decode_jpeg(tf.io.read_file(path))
    image = tf.image.resize(image, (IMG_SIZE, IMG_SIZE))
    return image, label

# Read files
train_ds = train_ds.map(lambda path, lbl: (tf.io.decode_jpeg(tf.io.read_file(path)), lbl), num_parallel_calls=AUTOTUNE)
val_ds = val_ds.map(lambda path, lbl: (tf.io.decode_jpeg(tf.io.read_file(path)), lbl), num_parallel_calls=AUTOTUNE)

# Batch and resize after batch, then prefetch
train_ds = val_ds.map(lambda imgs, lbls: (tf.image.resize(imgs, (IMG_SIZE, IMG_SIZE)), lbls), num_parallel_calls=AUTOTUNE)
val_ds = val_ds.map(lambda imgs, lbls: (tf.image.resize(imgs, (IMG_SIZE, IMG_SIZE)), lbls), num_parallel_calls=AUTOTUNE)
train_ds = train_ds.batch(BATCH_SIZE)
val_ds = val_ds.batch(BATCH_SIZE)

# Performance enhancement - cache, batch, prefetch
train_ds = train_ds.prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.prefetch(buffer_size=AUTOTUNE)
I tried changing the jpg file name but to no avail.
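One thing worth checking here is whether the relative paths in train.file (e.g. train/60377.jpg) actually resolve from the notebook's working directory; if they don't, the dataset root has to be prepended before building the tf.data pipeline. A quick sketch, where DATA_ROOT is a hypothetical path to wherever the Kaggle archive was extracted:

import os
import tensorflow as tf

DATA_ROOT = "/content/fairface"   # hypothetical; adjust to where the dataset lives

# Do the relative paths stored in train.file exist as-is?
missing = [p for p in list(train.file)[:100] if not os.path.exists(p)]
print(f"{len(missing)} of the first 100 paths are missing, e.g. {missing[:3]}")

# If they are missing, build absolute paths before creating the Dataset
train_files = [os.path.join(DATA_ROOT, p) for p in train.file]
val_files = [os.path.join(DATA_ROOT, p) for p in val.file]

train_ds = tf.data.Dataset.from_tensor_slices((train_files, y_train)).shuffle(len(y_train))
val_ds = tf.data.Dataset.from_tensor_slices((val_files, y_val))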

MariaDB storage increase

We faced a problem with MariaDB storage.
We have CentOS 8 with a monitoring system and MariaDB as the database system. When creating the Linux machine we gave it 120 GB, but after adding the nodes to the monitoring system the space filled up, so we increased the Linux root space by 50 GB.
But the problem still exists:
the mariadb service stops working, so we have to restart it
we need to restart the nginx service to restore web access
After these two manipulations the monitoring system works for 5-6 hours, then we again have to restart the DB and the web service.
We think the database doesn't use all 170 GB and 'sees' only the initial 120 GB. As a test we deleted approximately 15 devices (roughly 15 GB) from the monitoring system, ran it for 5 days, and there were no DB or web issues.
MariaDB - 10.3.28 version
The storage engine used is InnoDB.
We checked innodb_page_size:
Innodb_page_size = 16384
Could someone help us?
(two screenshots were attached: "innodb status1" and "innodb status2")
Analysis of GLOBAL STATUS and VARIABLES:
Observations:
Version: 10.3.28-MariaDB
3.6 GB of RAM
Uptime = 6d 22:26:28
429 Queries/sec : 145 Questions/sec
The More Important Issues:
Do you have 3.6GB of RAM? Are you using InnoDB? If yes to both of those, then make these two changes; they may help performance a lot:
key_buffer_size = 40M
innodb_buffer_pool_size = 2G
I'm getting conflicting advice on table_open_cache; let's leave it alone for now.
If you have SSD, I recommend these two:
innodb_io_capacity = 1000
innodb_flush_neighbors = 0
innodb_log_file_size = 200M -- Caution: It may be complex to change this in the version you are running. If so, leave it alone.
You seem to DELETE far more rows than you INSERT; what is going on?
Unless you do a lot of ALTERs, make this change:
myisam_sort_buffer_size = 50M
Details and other observations:
( (key_buffer_size - 1.2 * Key_blocks_used * 1024) ) = ((128M - 1.2 * 0 * 1024)) / 3865470566.4 = 3.5% -- Percent of RAM wasted in key_buffer.
-- Decrease key_buffer_size (now 134217728).
( Key_blocks_used * 1024 / key_buffer_size ) = 0 * 1024 / 128M = 0 -- Percent of key_buffer used. High-water-mark.
-- Lower key_buffer_size (now 134217728) to avoid unnecessary memory usage.
( (key_buffer_size / 0.20 + innodb_buffer_pool_size / 0.70) ) = ((128M / 0.20 + 128M / 0.70)) / 3865470566.4 = 22.3% -- Most of available ram should be made available for caching.
-- http://mysql.rjweb.org/doc.php/memory
( Key_reads + Key_writes + Innodb_pages_read + Innodb_pages_written + Innodb_dblwr_writes + Innodb_buffer_pool_pages_flushed ) = (0 + 0 + 107837817 + 14228075 + 669027 + 14217155) / 599188 = 228 /sec -- IOPs?
-- If the hardware can handle it, set innodb_io_capacity (now 200) to about this value.
( ( Key_reads + Key_writes + Innodb_pages_read + Innodb_pages_written + Innodb_dblwr_writes + Innodb_buffer_pool_pages_flushed ) / innodb_io_capacity / Uptime ) = ( 0 + 0 + 107837817 + 14228075 + 669027 + 14217155 ) / 200 / 599188 = 114.3% -- This may be a metric indicating whether innodb_io_capacity is set reasonably.
-- Increase innodb_io_capacity (now 200) if the hardware can handle it.
( Table_open_cache_misses ) = 12,156,771 / 599188 = 20 /sec
-- May need to increase table_open_cache (now 2000)
( Table_open_cache_misses / (Table_open_cache_hits + Table_open_cache_misses) ) = 12,156,771 / (184539214 + 12156771) = 6.2% -- Effectiveness of table_open_cache.
-- Increase table_open_cache (now 2000) and check table_open_cache_instances (now 8).
( innodb_buffer_pool_size ) = 128M -- InnoDB Data + Index cache
-- 128M (an old default) is woefully small.
( innodb_buffer_pool_size ) = 128 / 3865470566.4 = 3.5% -- % of RAM used for InnoDB buffer_pool
-- Set to about 70% of available RAM. (Too low is less efficient; too high risks swapping.)
( (key_buffer_size / 0.20 + innodb_buffer_pool_size / 0.70) ) = ((128M / 0.20 + 128M / 0.70)) / 3865470566.4 = 22.3% -- (metric for judging RAM usage)
( innodb_lru_scan_depth ) = 1,024
-- "InnoDB: page_cleaner: 1000ms intended loop took ..." may be fixed by lowering lru_scan_depth
( innodb_io_capacity ) = 200 -- When flushing, use this many IOPs.
-- Reads could be sluggish or spiky.
( innodb_io_capacity_max / innodb_io_capacity ) = 2,000 / 200 = 10 -- Capacity: max/plain
-- Recommend 2. Max should be about equal to the IOPs your I/O subsystem can handle. (If the drive type is unknown 2000/200 may be a reasonable pair.)
( Innodb_log_writes ) = 33,145,091 / 599188 = 55 /sec
( Innodb_os_log_written / (Uptime / 3600) / innodb_log_files_in_group / innodb_log_file_size ) = 67,002,682,368 / (599188 / 3600) / 2 / 48M = 4 -- Ratio
-- (see minutes)
( Uptime / 60 * innodb_log_file_size / Innodb_os_log_written ) = 599,188 / 60 * 48M / 67002682368 = 7.5 -- Minutes between InnoDB log rotations. Beginning with 5.6.8, this can be changed dynamically; be sure to also change my.cnf.
-- (The recommendation of 60 minutes between rotations is somewhat arbitrary.) Adjust innodb_log_file_size (now 50331648). (Cannot change in AWS.)
( innodb_flush_method ) = innodb_flush_method = fsync -- How InnoDB should ask the OS to write blocks. Suggest O_DIRECT or O_ALL_DIRECT (Percona) to avoid double buffering. (At least for Unix.) See chrischandler for caveat about O_ALL_DIRECT
( default_tmp_storage_engine ) = default_tmp_storage_engine =
( innodb_flush_neighbors ) = 1 -- A minor optimization when writing blocks to disk.
-- Use 0 for SSD drives; 1 for HDD.
( ( Innodb_pages_read + Innodb_pages_written ) / Uptime / innodb_io_capacity ) = ( 107837817 + 14228075 ) / 599188 / 200 = 101.9% -- If > 100%, need more io_capacity.
-- Increase innodb_io_capacity (now 200) if the drives can handle it.
( innodb_io_capacity ) = 200 -- I/O ops per second the disk is capable of. 100 for slow drives; 200 for spinning drives; 1000-2000 for SSDs; multiply by RAID factor.
( sync_binlog ) = 0 -- Use 1 for added security, at some cost of I/O. =1 may lead to lots of "query end"; =0 may lead to "binlog at impossible position" and lost transactions in a crash, but is faster. 0 is OK for Galera.
( innodb_adaptive_hash_index ) = innodb_adaptive_hash_index = ON -- Usually should be ON.
-- There are cases where OFF is better. See also innodb_adaptive_hash_index_parts (now 8) (after 5.7.9) and innodb_adaptive_hash_index_partitions (MariaDB and Percona). ON has been implicated in rare crashes (bug 73890). 10.5.0 decided to default OFF.
( innodb_print_all_deadlocks ) = innodb_print_all_deadlocks = OFF -- Whether to log all Deadlocks.
-- If you are plagued with Deadlocks, turn this on. Caution: If you have lots of deadlocks, this may write a lot to disk.
( myisam_sort_buffer_size ) = 134,216,704 / 3865470566.4 = 3.5% -- Used for ALTER, CREATE INDEX, OPTIMIZE, LOAD DATA; set when you need it. Also for MyISAM's REPAIR TABLE.
-- Decrease myisam_sort_buffer_size (now 134216704) to keep from blowing out RAM.
( innodb_ft_result_cache_limit ) = 2,000,000,000 / 3865470566.4 = 51.7% -- Byte limit on FULLTEXT resultset. (Possibly not preallocated, but grows?)
-- Lower the setting.
( innodb_autoextend_increment * 1048576 ) = (64 * 1048576) / 3865470566.4 = 1.7% -- How much to increase ibdata1 by (when needed).
-- Decrease setting to avoid premature swapping.
( character_set_server ) = character_set_server = latin1
-- Charset problems may be helped by setting character_set_server (now latin1) to utf8mb4. That is the future default.
( local_infile ) = local_infile = ON
-- local_infile (now ON) = ON is a potential security issue
( Created_tmp_disk_tables / Created_tmp_tables ) = 542,381 / 1084764 = 50.0% -- Percent of temp tables that spilled to disk
-- Maybe increase tmp_table_size (now 16777216) and max_heap_table_size (now 16777216); improve indexes; avoid blobs, etc.
( Com_delete / Com_insert ) = 2,294,352 / 1521534 = 150.8% -- Deletes / Inserts (as a pct). (Ignores LOAD, REPLACE, etc.)
( Com_insert + Com_delete + Com_delete_multi + Com_replace + Com_update + Com_update_multi ) = (1521534 + 2294352 + 21366 + 0 + 45590666 + 0) / 599188 = 82 /sec -- writes/sec
-- 50 writes/sec + log flushes will probably max out I/O write capacity of normal drives
( Com__biggest ) = Com__biggest = Com_stmt_execute -- Which of the "Com_" metrics is biggest.
-- Normally it is Com_select (now 34545111). If something else, then it may be a sloppy platform, or may be something else.
( binlog_format ) = binlog_format = MIXED -- STATEMENT/ROW/MIXED.
-- ROW is preferred by 5.7 (10.3)
( slow_query_log ) = slow_query_log = OFF -- Whether to log slow queries. (5.1.12)
( long_query_time ) = 10 -- Cutoff (Seconds) for defining a "slow" query.
-- Suggest 2
( back_log ) = 80 -- (Autosized as of 5.6.6; based on max_connections)
-- Raising to min(150, max_connections (now 151)) may help when doing lots of connections.
( thread_cache_size / Max_used_connections ) = 151 / 51 = 296.1%
-- There is no advantage in having the thread cache bigger than your likely number of connections. Wasting space is the disadvantage.
You have the Query Cache half-off. You should set both query_cache_type = OFF and query_cache_size = 0 . There is (according to a rumor) a 'bug' in the QC code that leaves some code on unless you turn off both of those settings.
Abnormally small:
(Com_select + Qcache_hits) / (Com_insert + Com_update + Com_delete + Com_replace) = 0.699
Com_show_tables = 0
Innodb_buffer_pool_bytes_data = 193 /sec
Table_locks_immediate = 0.53 /HR
eq_range_index_dive_limit = 0
innodb_spin_wait_delay = 4
Abnormally large:
Com_delete_multi = 0.036 /sec
Com_stmt_close = 141 /sec
Com_stmt_execute = 141 /sec
Com_stmt_prepare = 141 /sec
Handler_discover = 0.94 /HR
Innodb_buffer_pool_read_ahead = 162 /sec
Innodb_buffer_pool_reads * innodb_page_size / innodb_buffer_pool_size = 114837.2%
Innodb_data_pending_fsyncs = 2
Innodb_os_log_fsyncs = 55 /sec
Opened_plugin_libraries = 0.006 /HR
Table_open_cache_active_instances = 4
Tc_log_page_size = 4,096
Abnormal strings:
aria_recover_options = BACKUP,QUICK
innodb_fast_shutdown = 1
log_slow_admin_statements = ON
myisam_stats_method = NULLS_UNEQUAL
old_alter_table = DEFAULT
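After changing any of the settings above and restarting MariaDB, it is worth confirming which values the server actually picked up. A quick sketch, assuming the pymysql package is installed and using placeholder local credentials:

import pymysql

# Connect to the local MariaDB instance (credentials are placeholders)
conn = pymysql.connect(host="localhost", user="root", password="***", database="mysql")

with conn.cursor() as cur:
    cur.execute(
        "SHOW GLOBAL VARIABLES WHERE Variable_name IN "
        "('innodb_buffer_pool_size', 'key_buffer_size', 'innodb_io_capacity', 'innodb_flush_neighbors')"
    )
    for name, value in cur.fetchall():
        print(f"{name} = {value}")

conn.close()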

Complex number in Fenics

I am currently trying to solve a complex-valued PDE with Fenics in a Jupyter notebook, but I am having trouble when I try to use a complex number in Fenics.
Here is how I've defined the variational problem:
u = TrialFunction(V)
v = TestFunction(V)
a = (inner(grad(u[0]), grad(v[0])) + inner(grad(u[1]), grad(v[1])))*dx + sin(lat)*(u[0]*v[1]-u[1]*v[0])*dx+1j*((-inner(grad(u[0]), grad(v[1])) + inner(grad(u[1]), grad(v[0])))*dx + (sin(lat)*(u[0]*v[0]-u[1]*v[1])*dx))
f = Constant((1.0,1.0))
b = (v[0]*f[0]+f[1]*v[1])*ds+1j*((f[1]*v[0]-f[0]*v[1])*ds)
I got the following error message:
AttributeError Traceback (most recent call last)
<ipython-input-74-7760afa5a395> in <module>()
1 u = TrialFunction(V)
2 v = TestFunction(V)
----> 3 a = (inner(grad(u[0]), grad(v[0])) + inner(grad(u[1]), grad(v[1])))*dx + sin(lat)*(u[0]*v[1]-u[1]*v[0])*dx+1j*((-inner(grad(u[0]), grad(v[1])) + inner(grad(u[1]), grad(v[0])))*dx + (sin(lat)*(u[0]*v[0]-u[1]*v[1])*dx)
4 f = Constant((0.0,0.0))
5 b = (v[0]*f[0]+f[1]*v[1])*ds+1j*((f[1]*v[0]-f[0]*v[1])*ds)
~/anaconda3_420/lib/python3.5/site-packages/ufl/form.py in __rmul__(self, scalar)
305 "Multiply all integrals in form with constant scalar value."
306 # This enables the handy "0*form" or "dt*form" syntax
--> 307 if is_scalar_constant_expression(scalar):
308 return Form([scalar*itg for itg in self.integrals()])
309 return NotImplemented
~/anaconda3_420/lib/python3.5/site-packages/ufl/checks.py in is_scalar_constant_expression(expr)
84 if is_python_scalar(expr):
85 return True
---> 86 if expr.ufl_shape:
87 return False
88 return is_globally_constant(expr)
AttributeError: 'complex' object has no attribute 'ufl_shape'
Could someone please help me?
By the way, Fenics might not be the best tool for solving complex-valued PDEs, and I would like to read your suggestions for such problems.
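For what it's worth, legacy FEniCS/UFL assembles real-valued forms only, which is why multiplying a form by 1j raises that AttributeError. A common workaround is to keep the real and imaginary parts as two separate real forms. A rough sketch reusing the notation above (V, lat and the measures are assumed to be defined as in the question):

from dolfin import *  # legacy FEniCS; some installs expose the same names via `from fenics import *`

u = TrialFunction(V)
v = TestFunction(V)
f = Constant((1.0, 1.0))

# Real part of the bilinear form -- note: no 1j anywhere
a_re = (inner(grad(u[0]), grad(v[0])) + inner(grad(u[1]), grad(v[1]))) * dx \
    + sin(lat) * (u[0] * v[1] - u[1] * v[0]) * dx

# Imaginary part, kept as its own real-valued form
a_im = (-inner(grad(u[0]), grad(v[1])) + inner(grad(u[1]), grad(v[0]))) * dx \
    + sin(lat) * (u[0] * v[0] - u[1] * v[1]) * dx

# Same split for the right-hand side
L_re = (v[0] * f[0] + f[1] * v[1]) * ds
L_im = (f[1] * v[0] - f[0] * v[1]) * ds

# The pieces can then be assembled separately and combined into a real
# 2x2 block system; alternatively, the complex build of DOLFINx supports
# complex-valued forms directly.
A_re, b_re = assemble(a_re), assemble(L_re)
A_im, b_im = assemble(a_im), assemble(L_im)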

MAX Partitioning in ETL using SSIS

I have a query in SQL Server where I am using MAX ... OVER (PARTITION BY ...).
MAX(Duration) OVER (PARTITION BY [Variable8]
ORDER BY [RouterCallKeySequenceNumber] ASC) AS MaxDuration
I would like to implement this in my ETL using SSIS.
I have tried an approach similar to how a row number can be implemented.
I added a SORT transformation, sorted by Variable8 and RouterCallKeySequenceNumber, and then added a Script transformation:
string _variable8 = "";
int _max_duration;

public override void Input0_ProcessInputRow(Input0Buffer Row)
{
    _max_duration = Row.Duration;
    if (Row.Variable8 != _variable8)
    {
        _max_duration = Row.Duration;
        Row.maxduration = _max_duration;
        _variable8 = Row.Variable8;
    }
    else
    {
        if (Row.Duration >= _max_duration)
        {
            Row.maxduration = _max_duration;
        }
    }
}
This is the data that I have -
Variable8 RouterCallKeySequenceNumber Duration
153084-2490 0 265
153084-2490 1 161
153084-2490 2 197
The solution that I need is as below -
Variable8 RouterCallKeySequenceNumber Duration Max Duration
153084-2490 0 265 265
153084-2490 1 161 265
153084-2490 2 197 265
But this does not return the desired value.
I would appreciate any help you can provide.
Thanks
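For reference, here is what the windowed MAX with that ORDER BY computes, illustrated in plain Python rather than SSIS script code: a running maximum that resets whenever Variable8 changes and is written to the output on every row.

# Rows are assumed to be sorted by (Variable8, RouterCallKeySequenceNumber),
# exactly like the output of the SORT transformation.
rows = [
    ("153084-2490", 0, 265),
    ("153084-2490", 1, 161),
    ("153084-2490", 2, 197),
]

current_key = None
running_max = None
for variable8, seq, duration in rows:
    if variable8 != current_key:          # new partition: reset the running max
        current_key = variable8
        running_max = duration
    else:                                 # same partition: keep the larger value
        running_max = max(running_max, duration)
    print(variable8, seq, duration, running_max)
# Prints MaxDuration = 265 on all three rows, matching the expected output.

One thing to note about the posted script: the unconditional `_max_duration = Row.Duration;` at the top of ProcessInputRow overwrites the remembered maximum on every row, so maxduration ends up tracking each row's own Duration instead of the running maximum.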

ExtJS problem: calculation of rows

I have a table:
Fields   class 1   class 2   class 3   class 4
a1       10        240       340       401
a2       12        270       340       405
a3       12        270       340       405
a4       15        270       360       405
a5       17        720       530       450
I have this in a grid as well as in a Json.store. What I have to do is perform a mathematical calculation each time the grid is refreshed by "table name".reconfigure(..... , ....).
Consider the column "class 1":
value(a5) = ( value(a1) + 2*value(a2) + 3*value(a3) ) / value(a4)
Can anybody please help me with this problem?
I will be very thankful for any help :)
As I'm not sure what aspect of the problem you are having difficulty with, I'll address both at a high level.
Generally speaking you want to have your reconfigure method update the Ext Store, which will then trigger an event that the Grid should handle. Basically, change the Store and your Grid will be updated automatically.
As far as generating the correct new row... it seems fairly straightforward - a rough pass:
/* for each field foo_X through foo_N: */
var lastElementIndex = store.getCount() - 1;
var total = 0;
for (var i = 0; i < store.getCount(); i++) {
    if (i != lastElementIndex) {
        // weight row i by (i + 1): value(a1) + 2*value(a2) + 3*value(a3) + ...
        total += store.getAt(i).get(foo_X) * (i + 1);
    } else {
        // the last row (a4 in the example) is the divisor
        total = total / store.getAt(i).get(foo_X);
    }
}
/* construct your json object with the field foo */
/* after looping through all your fields, create your record and add it to the Store */
