DBCC SHRINKFILE failing - sql-server

I'm trying to shrink a database on a test system, but the file won't get any smaller. The MDF file is 47 GB and the unused space is 38 GB. A lot of data was removed from the database, hence the large amount of unused space.
The following error appears:
Start dbcc shrinkfile ( Olympus_dat, 46912 ) at 2015-07-23 15:27:19.300
DBCC SHRINKFILE: Page 1:6017543 could not be moved because it has not been formatted.
How can I fix this error?
SQL Server 2012 x64
Additional information.
The following commands return an error:
DBCC TRACEON(3604)
GO
DBCC page('Olympus', 1, 6017543, 1)
Go
DBCC TRACEOFF(3604)
GO
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
PAGE: (14440:908631589)
BUFFER:
BUF #0x0000000009682040
bpage = 0x00000002B46B8000 bhash = 0x0000000000000000 bpageno = (1:6017543)
bdbid = 6 breferences = 1 bcputicks = 0
bsampleCount = 0 bUse1 = 23548 bstat = 0x809
blog = 0x5adb215a bnext = 0x0000000000000000
PAGE HEADER:
Page #0x00000002B46B8000
m_pageId = (14440:908631589) m_headerVersion = 252 m_type = 226
m_typeFlagBits = 0xf9 m_level = 216 m_flagBits = 0xd676
m_objId (AllocUnitId.idObj) = -1238914908 m_indexId (AllocUnitId.idInd) = 23941
Metadata: AllocUnitId = 6738992698879115264 Metadata: PartitionId = 0
Metadata: IndexId = -1 Metadata: ObjectId = 0 m_prevPage = (35510:854211095)
m_nextPage = (61606:1041616947) pminlen = 43990 m_slotCnt = 27900
m_freeCnt = 40464 m_freeData = 34288 m_reservedCnt = 12643
m_lsn = (-257029635:1920476993:30788) m_xactReserved = 11969
m_xdesId = (25449:1820050307) m_ghostRecCnt = 61532 m_tornBits = -213551362
DB Frag ID = 1
Allocation Status
GAM (1:5623552) = ALLOCATED SGAM (1:5623553) = ALLOCATED
PFS (1:6017472) = 0x40 ALLOCATED 0_PCT_FULL DIFF (1:5623558) = NOT CHANGED
ML (1:5623559) = NOT MIN_LOGGED
CompressionInfo #0x00000000309682B0
CompressionInfo Raw Bytes
CompressionInfo size (in bytes) = 0 PageModCount = 50289 CI Header Flags =
DATA:
Slot 0, Offset 0x4ad2, Length 1, DumpStyle BYTE
Record Type = (COMPRESSED) EMPTY_GHOST_RECORD Record size = 1
CD Array
Record Memory Dump
000000003096EAD2: 05 .
Msg 0, Level 11, State 0, Line 0
A severe error occurred on the current command. The results, if any, should be discarded.

Please follow the steps given in the answer at the bottom of this page:
https://ask.sqlservercentral.com/questions/19676/dbcc-shrinkfile-error.html
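Not a definitive fix, but a consistency check is usually a sensible first step when SHRINKFILE reports a page it cannot process. Below is a minimal sketch that drives the same DBCC commands from Python with pyodbc; the connection string and driver name are assumptions, and the same statements can of course be run directly in SSMS:
# Minimal sketch: check the database for corruption, then retry the shrink.
# The connection string, driver name, and Windows authentication are assumptions.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;DATABASE=Olympus;Trusted_Connection=yes;",
    autocommit=True,  # keep the DBCC statements out of an implicit transaction
)
cur = conn.cursor()

# Full integrity check; any corruption reported here typically has to be
# resolved before DBCC SHRINKFILE can move the affected pages.
cur.execute("DBCC CHECKDB ('Olympus') WITH NO_INFOMSGS, ALL_ERRORMSGS;")
while cur.nextset():  # drain any result sets the DBCC command returns
    pass

# If CHECKDB comes back clean, retry the shrink from the question.
cur.execute("DBCC SHRINKFILE (Olympus_dat, 46912);")
print(cur.fetchall())  # SHRINKFILE returns one row with the new file sizes
conn.close()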

Related

Getting No such file or directory [[{{node ReadFile}}]] [[IteratorGetNext]] [Op:__inference_train_function_9137] error

This may have a simple answer, but I'm currently building a neural network using Keras and ran into this problem with the following code:
EPOCHS = 50
callbacks = [
    tf.keras.callbacks.ReduceLROnPlateau(
        monitor='val_loss', factor=0.1, patience=10, verbose=1, mode='min', min_delta=0.0001),
    tf.keras.callbacks.ModelCheckpoint(
        'weights.tf', monitor='val_loss', verbose=1, save_best_only=True, save_weights_only=True),
    tf.keras.callbacks.EarlyStopping(
        monitor='val_loss', min_delta=0, patience=15, verbose=1, restore_best_weights=True)
]
history = model.fit(
    train_ds,
    validation_data=val_ds,
    verbose=1,
    callbacks=callbacks,
    epochs=EPOCHS,
)
model.load_weights('weights.tf')
model.evaluate(val_ds)
Output:
Epoch 1/50
NotFoundError                             Traceback (most recent call last)
<ipython-input-15-265d39d703c7> in <module>
     10 ]
     11
---> 12 history = model.fit(
     13     train_ds,
     14     validation_data=val_ds,

1 frames
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
     52   try:
     53     ctx.ensure_initialized()
---> 54     tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
     55                                         inputs, attrs, num_outputs)
     56   except core._NotOkStatusException as e:

NotFoundError: Graph execution error:
train/60377.jpg; No such file or directory
    [[{{node ReadFile}}]]
    [[IteratorGetNext]] [Op:__inference_train_function_9137]
Here's my data:
FairFace Dataset from Kaggle
Here's how I preprocessed the images from the FairFace dataset (using code I borrowed):
IMG_SIZE = 224
AUTOTUNE = tf.data.AUTOTUNE
BATCH_SIZE = 224
NUM_CLASSES = len(labels_map)

# Dataset creation
y_train = tf.keras.utils.to_categorical(train.race, num_classes=NUM_CLASSES, dtype='float32')
y_val = tf.keras.utils.to_categorical(val.race, num_classes=NUM_CLASSES, dtype='float32')
train_ds = tf.data.Dataset.from_tensor_slices((train.file, y_train)).shuffle(len(y_train))
val_ds = tf.data.Dataset.from_tensor_slices((val.file, y_val))
assert len(train_ds) == len(train.file) == len(train.race)
assert len(val_ds) == len(val.file) == len(val.race)

# Read files
def map_fn(path, label):
    image = tf.io.decode_jpeg(tf.io.read_file(path))
    image = tf.image.resize(image, (IMG_SIZE, IMG_SIZE))
    return image, label

# Read files
train_ds = train_ds.map(lambda path, lbl: (tf.io.decode_jpeg(tf.io.read_file(path)), lbl), num_parallel_calls=AUTOTUNE)
val_ds = val_ds.map(lambda path, lbl: (tf.io.decode_jpeg(tf.io.read_file(path)), lbl), num_parallel_calls=AUTOTUNE)

# Resize, then batch and prefetch
train_ds = train_ds.map(lambda imgs, lbls: (tf.image.resize(imgs, (IMG_SIZE, IMG_SIZE)), lbls), num_parallel_calls=AUTOTUNE)
val_ds = val_ds.map(lambda imgs, lbls: (tf.image.resize(imgs, (IMG_SIZE, IMG_SIZE)), lbls), num_parallel_calls=AUTOTUNE)
train_ds = train_ds.batch(BATCH_SIZE)
val_ds = val_ds.batch(BATCH_SIZE)

# Performance enhancement - cache, batch, prefetch
train_ds = train_ds.prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.prefetch(buffer_size=AUTOTUNE)
I tried changing the jpg file name but to no avail.
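Not an authoritative answer, but the traceback shows TensorFlow trying to open the relative path train/60377.jpg, which is resolved against the current working directory. One hedged option is to turn the file column into absolute paths before building the datasets; this assumes train and val are pandas DataFrames (as the snippets above suggest), and data_dir is a hypothetical path to the extracted FairFace folder:
import os

# Hypothetical location of the extracted FairFace dataset; adjust to your setup.
data_dir = '/content/fairface'

# Prepend the dataset root so 'train/60377.jpg' becomes an absolute path that
# exists regardless of which directory the notebook is started from.
train = train.assign(file=train.file.map(lambda p: os.path.join(data_dir, p)))
val = val.assign(file=val.file.map(lambda p: os.path.join(data_dir, p)))

# Optional sanity check before training: fail early on any missing image.
missing = [p for p in train.file if not os.path.exists(p)]
assert not missing, f'{len(missing)} files not found, e.g. {missing[:3]}'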

Get current Payara MaxHeapSize and MetaspaceSize

I have a running Payara 4 instance for which I set the MaxHeapSize and MetaspaceSize as described here, to make it production ready. How can I check that those values were correctly set?
You could check this using jmap -heap <pid> on the PID of the Payara process. jmap is located in the JDK bin directory.
On JDK 9+ you need to use jhsdb jmap --heap --pid <PID> instead.
The output should contain the needed information, e.g.:
Heap Configuration:
MinHeapFreeRatio = 0
MaxHeapFreeRatio = 100
MaxHeapSize = 268435456 (256.0MB)
NewSize = 89128960 (85.0MB)
MaxNewSize = 89128960 (85.0MB)
OldSize = 179306496 (171.0MB)
NewRatio = 2
SurvivorRatio = 8
MetaspaceSize = 21807104 (20.796875MB)
CompressedClassSpaceSize = 1073741824 (1024.0MB)
MaxMetaspaceSize = 17592186044415 MB
G1HeapRegionSize = 0 (0.0MB)
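If you want to pull out just those two values programmatically, a small Python sketch around the command above works; the PID is a placeholder, and the JDK 9+ variant is used (swap in jmap -heap <pid> on JDK 8):
# Minimal sketch: run the command from the answer and print the heap/metaspace lines.
import subprocess

pid = '12345'  # placeholder: the PID of the Payara process
cmd = ['jhsdb', 'jmap', '--heap', '--pid', pid]  # on JDK 8: ['jmap', '-heap', pid]
output = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

for line in output.splitlines():
    if 'MaxHeapSize' in line or 'MetaspaceSize' in line:
        print(line.strip())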

Import data from MS SQL Server to HBase with Flume

I'm really new to Flume. I prefer Flume over Sqoop because, in my case, data is continuously being imported into MS SQL Server, so I think Flume is the better choice since it can transfer data in real time.
I just followed some online examples and then edited my own Flume config file, which describes the source, channel, and sink. However, Flume didn't seem to work; no data was transferred to HBase.
mssql-hbase.conf
# source, channel, sink
agent1.sources = src1
agent1.channels = ch1
agent1.sinks = sk1
# declare source type
agent1.sources.src1.type = org.keedio.flume.source.SQLSource
agent1.sources.src1.hibernate.connection.url = jdbc:sqlserver://xx.xx.xx.xx:1433;DatabaseName=xxxx
agent1.sources.src1.hibernate.connection.user = xxxx
agent1.sources.src1.hibernate.connection.password = xxxx
agent1.sources.src1.table = xxxx
agent1.sources.src1.hibernate.connection.autocommit = true
# declare sql server hibernate dialect
agent1.sources.src1.hibernate.dialect = org.hibernate.dialect.SQLServerDialect
agent1.sources.src1.hibernate.connection.driver_class = com.microsoft.sqlserver.jdbc.SQLServerDriver
#agent1.sources.src1.hibernate.provider_class=org.hibernate.connection.C3P0ConnectionProvider
#agent1.sources.src1.columns.to.select = *
#agent1.sources.src1.incremental.column.name = PK, name, machine, time
#agent1.sources.src1.start.from=0
#agent1.sources.src1.incremental.value = 0
# query time interval
agent1.sources.src1.run.query.delay = 5000
# declare the folder location where flume state is saved
agent1.sources.src1.status.file.path = /home/user/flume-source-state
agent1.sources.src1.status.file.name = src1.status
agent1.sources.src1.batch.size = 1000
agent1.sources.src1.max.rows = 1000
agent1.sources.src1.delimiter.entry = |
# set the channel to memory mode
agent1.channels.ch1.type = memory
agent1.channels.ch1.capacity = 10000
agent1.channels.ch1.transactionCapacity = 10000
agent1.channels.ch1.byteCapacityBufferPercentage = 20
agent1.channels.ch1.byteCapacity = 800000
# declare sink type
agent1.sinks.sk1.type = org.apache.flume.sink.hbase.HBaseSink
agent1.sinks.sk1.table = yyyy
agent1.sinks.sk1.columnFamily = yyyy
agent1.sinks.sk1.hdfs.batchSize = 100
agent1.sinks.sk1.serializer = org.apache.flume.sink.hbase.RegexHbaseEventSerializer
agent1.sinks.sk1.serializer.regex = ^\"(.*?)\",\"(.*?)\",\"(.*?)\"$
agent1.sinks.sk1.serializer.colNames = PK, name, machine, time
# bind source, channel, sink
agent1.sources.src1.channels = ch1
agent1.sinks.sk1.channel = ch1
However, I used a similar config file to transfer data from MySQL to HBase, and luckily it worked.
mysql-hbase.conf
# source, channel, sink
agent1.sources = src1
agent1.channels = ch1
agent1.sinks = sk1
# declare source type
agent1.sources.src1.type = org.keedio.flume.source.SQLSource
agent1.sources.src1.hibernate.connection.url = jdbc:mysql://xxxx:3306/userdb
agent1.sources.src1.hibernate.connection.user = xxxx
agent1.sources.src1.hibernate.connection.password = xxxx
agent1.sources.src1.table = xxxx
agent1.sources.src1.hibernate.connection.autocommit = true
# declare mysql hibernate dialect
agent1.sources.src1.hibernate.dialect = org.hibernate.dialect.MySQL5Dialect
agent1.sources.src1.hibernate.connection.driver_class = com.mysql.jdbc.Driver
#agent1.sources.src1.hibernate.provider_class=org.hibernate.connection.C3P0ConnectionProvider
#agent1.sources.src1.columns.to.select = *
#agent1.sources.src1.incremental.column.name = id
#agent1.sources.src1.incremental.value = 0
# query time interval
agent1.sources.src1.run.query.delay = 5000
# declare the folder location where flume state is saved
agent1.sources.src1.status.file.path = /home/user/flume-source-state
agent1.sources.src1.status.file.name = src1.status
#agent1.sources.src1.interceptors=i1
#agent1.sources.src1.interceptors.i1.type=search_replace
#agent1.sources.src1.interceptors.i1.searchPattern="
#agent1.sources.src1.interceptors.i1.replaceString=,
# Set the channel to memory mode
agent1.channels.ch1.type = memory
agent1.channels.ch1.capacity = 10000
agent1.channels.ch1.transactionCapacity = 10000
agent1.channels.ch1.byteCapacityBufferPercentage = 20
agent1.channels.ch1.byteCapacity = 800000
# declare sink type
agent1.sinks.sk1.type = org.apache.flume.sink.hbase.HBaseSink
agent1.sinks.sk1.table = user_test_2
agent1.sinks.sk1.columnFamily = user_hobby
agent1.sinks.sk1.hdfs.batchSize = 100
agent1.sinks.sk1.serializer = org.apache.flume.sink.hbase.RegexHbaseEventSerializer
agent1.sinks.sk1.serializer.regex = ^\"(.*?)\",\"(.*?)\",\"(.*?)\",\"(.*?)\"$
agent1.sinks.sk1.serializer.colNames = id,name,age,hobby
# bind source, channel, sink
agent1.sources.src1.channels = ch1
agent1.sinks.sk1.channel = ch1
Does anyone know if there is something wrong in the config file? Thanks.
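One difference that stands out (a guess, not a verified fix): the MS SQL config sets delimiter.entry = | and its serializer regex has three capture groups, while colNames lists four columns; the working MySQL config has four groups for four columns. A quick Python sketch to sanity-check whether a sample event body would match the sink's regex (the sample rows are hypothetical):
import re

# The serializer regex from the MS SQL config above.
pattern = re.compile(r'^\"(.*?)\",\"(.*?)\",\"(.*?)\"$')

# Hypothetical event bodies: one quoted/comma-delimited, one pipe-delimited.
samples = ['"1","pump","M-01"', '1|pump|M-01|2015-07-23']

for body in samples:
    match = pattern.match(body)
    # Bodies that do not match the regex produce no row for the HBase sink,
    # which would show up as an empty target table.
    print(body, '->', match.groups() if match else 'no match')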

Input file for a Fortran program, containing "&PROBIN"

I have the following input file for a Fortran code (from some link):
&PROBIN
model_file = "model_file"
drdxfac = 5
max_levs = 1
n_cellx = 106
n_celly = 106
n_cellz = 106
max_grid_size = 32
anelastic_cutoff = 1.e3
base_cutoff_density = 1.e3
sponge_center_density = 3.d6
sponge_start_factor = 3.333d0
sponge_kappa = 10.0d0
max_mg_bottom_nlevels = 3
mg_bottom_solver = 4
hg_bottom_solver = 4
spherical_in = 1
dm_in = 3
do_sponge = .true.
prob_hi_x = 2.e10
prob_hi_y = 2.e10
prob_hi_z = 2.e10
max_step = 100
init_iter = 1
stop_time = 30000.
plot_int = 10
plot_deltat = 10.0d0
chk_int = 100
cflfac = 0.7d0
init_shrink = 0.1d0
max_dt_growth = 1.1d0
use_soundspeed_firstdt = T
use_divu_firstdt = T
bcx_lo = 12
bcx_hi = 12
bcy_lo = 12
bcy_hi = 12
bcz_lo = 12
bcz_hi = 12
verbose = 1
mg_verbose = 1
cg_verbose = 1
do_initial_projection = T
init_divu_iter = 3
drive_initial_convection = T
stop_initial_convection = 20
do_burning = F
velpert_amplitude = 1.d6
velpert_radius = 2.d7
velpert_scale = 1.d7
velpert_steep = 1.d5
enthalpy_pred_type = 1
evolve_base_state = F
dpdt_factor = 0.0d0
use_tfromp = T
single_prec_plotfiles = T
use_eos_coulomb = T
plot_trac = F
/
My question is: what is &PROBIN? Where can I find more information on it?
Such an input file would typically be read using namelist formatting.
Details can be found using this term. One example of use is given in this answer to a question about input.
In summary, the &PROBIN says that what follows (up to a terminating /) is a set of variable-value pairs. These correspond to the namelist probin. In the Fortran source we would find a namelist statement:
namelist /probin/ list, of, variables
with a corresponding input statement
read(unit, NML=probin)
where the unit unit is connected to that input file.
Of course, it's entirely possible that the file is an input file processed in the "usual" way. In this case &PROBIN has no special significance. The &PROBIN is necessary to support namelist formatting, but not unique to it.
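As a side note (not part of the answer above): if you only want to inspect such a file from outside Fortran, the third-party f90nml Python package can parse namelist groups like &PROBIN. A minimal sketch, assuming the file is saved as inputs:
# Minimal sketch using the third-party f90nml package (pip install f90nml);
# the file name 'inputs' is an assumption.
import f90nml

nml = f90nml.read('inputs')   # parses every namelist group in the file
probin = nml['probin']        # the &PROBIN ... / group, as a dict-like object

print(probin['model_file'])   # -> 'model_file'
print(probin['max_step'])     # -> 100
print(probin['do_sponge'])    # -> True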

m_tornBits field in a SQL Server page

Every page in an MDF file (SQL Server) has an m_tornBits field in the page header.
Can anybody explain what this value means?
Here is an example of a page header:
PAGE HEADER:
Page #0x1A198000
m_pageId = (1:135) m_headerVersion = 1 m_type = 1
m_typeFlagBits = 0x0 m_level = 0 m_flagBits = 0x2
m_objId = 3 m_indexId = 0 m_prevPage = (1:89)
m_nextPage = (0:0) pminlen = 46 m_slotCnt = 80
m_freeCnt = 2360 m_freeData = 7036 m_reservedCnt = 0
m_lsn = (8:213:7) m_xactReserved = 0 m_xdesId = (0:834)
m_ghostRecCnt = 0 m_tornBits = 822083793
Here the m_tornBits field is 822083793.
What does this mean?
From Technet: SQL Server 2000 I/O Basics
Torn I/O
Torn I/O is often referred to as a torn page in SQL Server documentation. A torn I/O occurs when a partial write takes place, leaving the data in an invalid state. SQL Server 2000/7.0 data pages are 8 KB in size. A torn data page for SQL Server occurs when only a portion of the 8 KB is correctly written to or retrieved from stable media.
m_tornBits contains the TORN or CHECKSUM validation value(s).
When the page is read from disk and PAGE_VERIFY protection is enabled for the database, the torn bits are audited.
You can find your answer here in this document (search for m_tornBits):
http://download.microsoft.com/download/4/7/a/47a548b9-249e-484c-abd7-29f31282b04d/SQLIOBasicsCh2.doc
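Because the meaning of m_tornBits depends on the database's PAGE_VERIFY setting, it can help to check what each database currently uses. The SELECT below works as-is in SSMS; the Python/pyodbc wrapper and its connection string are just an assumed convenience:
# Minimal sketch: list the PAGE_VERIFY setting of every database via pyodbc.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;Trusted_Connection=yes;"
)
for name, page_verify in conn.execute(
    "SELECT name, page_verify_option_desc FROM sys.databases"
):
    # CHECKSUM stores a checksum in m_tornBits, TORN_PAGE_DETECTION stores the
    # torn bits, and NONE means the field is not used for verification.
    print(name, page_verify)
conn.close()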
