ORA-00907: missing right parenthesis in CREATE DATABASE - database

I keep getting the same error while creating a database. It seems like something is wrong with the syntax. Here is how I create it:
CREATE DATABASE darkarmy
USER SYS IDENTIFIED BY admin
USER SYSTEM IDENTIFIED BY admin
LOGFILE GROUP 1 ('/u01/zdw07/logs/redo01a.log') SIZE 10M,
GROUP 2 ('/u01/zdw07/logs/redo02a.log') SIZE 10M,
GROUP 3 ('/u01/zdw07/logs/redo03a.log') SIZE 10M,
GROUP 4 ('/u01/zdw07/logs/redo04a.log’) SIZE 10M
MAXLOGFILES 5
MAXLOGMEMBERS 5
MAXLOGHISTORY 10
MAXDATAFILES 50
CHARACTER SET UTF8
NATIONAL CHARACTER SET UTF8
EXTENT MANAGEMENT LOCAL
DATAFILE '/u01/zdw07/darkarmy/node03/upike69.dbf' SIZE 100M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED
SYSAUX DATAFILE '/u01/zdw07/darkarmy/node02/jap41.dbf' SIZE 100M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED,
'/u01/zdw07/darkarmy/node02/nud37.dbf' SIZE 100M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED
DEFAULT TABLESPACE users
DATAFILE '/u01/zdw07/darkarmy/node03/arulice693.dbf'
SIZE 50M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED
DEFAULT TEMPORARY TABLESPACE temp
TEMPFILE '/u01/zdw07/darkarmy/temp01.dbf' SIZE 100M REUSE
UNDO TABLESPACE undotbs1
DATAFILE '/u01/zdw07/darkarmy/undotbs1.dbf' SIZE 100M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
The error message:
'/u01/zdw07/darkarmy/node03/upike69.dbf' SIZE 100M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED
*
ERROR at line 16:
ORA-00907: missing right parenthesis
I am new to Oracle, would be grateful for any help!

I think you have a problem here:
GROUP 4 ('/u01/zdw07/logs/redo04a.log’) SIZE 10M
The closing quote is a typographic quote (’) where it should be a straight single quote (').
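For reference, the same line with a straight quote (everything else in the statement unchanged):
GROUP 4 ('/u01/zdw07/logs/redo04a.log') SIZE 10M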

Related

Changing a partition with fdisk shows a warning like "partition#x contains ext4-signature"

I'm shrinking a partition with
# Shrink the filesystem first
fsck -f /dev/sdb2
resize2fs /dev/sdb2 -M -p
# Then shrink the partition
fdisk /dev/sdb
... # now I change partition 2 to the new (smaller) size
fdisk gives me a (red) warning like "Partition #2 contains an ext4 signature" (shown in German as "Partition #2 enthält eine ext4-Signatur").
Is there something wrong? Why does fdisk show me this warning?
I ran into the same thing.
It simply means there is an ext4 filesystem on this partition.
For example, this warning also appears when growing a partition and its filesystem. Just select "N" if you don't plan to remove the filesystem.
Created a new partition 1 of type 'Linux' and of size 100 GiB.
Partition #1 contains an ext4 signature.
Do you want to remove the signature? [Y]es/[N]o: N
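If you want to confirm which signatures fdisk is seeing before you answer the prompt, you can list them first (a small sketch; without options, wipefs only prints the signatures and does not erase anything):
# list filesystem/RAID signatures on the partition (read-only)
wipefs /dev/sdb2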
Most complete answer here: https://unix.stackexchange.com/questions/477991/what-is-a-vfat-signature/478001#478001

Database size MSSQL

I have two databases: DB_1Large and DB_2Medium
DB_1Large.mdb is 50 GB
DB_2Medium.mdb is 16 GB
Problem:
It looks odd, but the backups of these databases have the following sizes:
DB_1Large.bak - ~30 GB
DB_2Medium.bak - ~12 GB
And after compressing with WinRAR or Management Studio they have the following sizes:
DB_1Large.bak - ~2.5 GB
DB_2Medium.bak - ~5.5 GB
Why, and how can I make them smaller?
Exec sp_spaceused for both databases:
database_name database_size unallocated space
DB_1Large 52349.38 MB 20197.74 MB
reserved data index_size unused
30546184 KB 16273760 KB 13500336 KB 772088 KB
database_name database_size unallocated space
DB_2Medium 17144.19 MB 4672.13 MB
reserved data index_size unused
12457024 KB 10608232 KB 1809120 KB 39672 KB
Here COMPRESSION is specified at backup time; it compresses the database backup as it is taken:
Use master
Go
BACKUP DATABASE [DatabaseName]
TO DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Backup\DatabaseName_2017-08-07.bak'
WITH COPY_ONLY, NOFORMAT, NOINIT,
NAME = N'SqlClass-Full Database Backup',
SKIP, NOREWIND, NOUNLOAD, COMPRESSION, STATS = 5
GO
Assuming that you don't use the COMPRESSION option, your databases contain about 30 GB and 12 GB of actual data respectively. A backup copies only the data, not the empty space.
If you are interested in the WinRAR compression ratio, you'd better look into its compression algorithm; maybe one backup simply contains more "compressible", repetitive byte patterns than the other.
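If you want to see how much the native COMPRESSION option actually saves, one way (assuming backup history is still present in msdb) is to compare the raw and compressed sizes recorded for each backup:
-- compare native backup sizes for both databases (values are in bytes)
SELECT database_name,
       backup_finish_date,
       backup_size / 1048576.0            AS backup_size_mb,
       compressed_backup_size / 1048576.0 AS compressed_size_mb
FROM msdb.dbo.backupset
WHERE database_name IN ('DB_1Large', 'DB_2Medium')
ORDER BY backup_finish_date DESC;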

UFS - how a 0 bytes file broke filesystem header?

For those reaching here: unfortunately I could not recover the data. After various attempts and after reproducing the problem, it was too costly to keep trying, so we just used a past backup to recreate the information we needed.
A human error broke a 150 GB UFS filesystem (Solaris).
While trying to take a backup of the filesystem (c0t0d0s3), ufsdump(1M) was not used correctly.
I will explain the background that led to this ...
The admin used:
root#ats-br000432 # ufsdump 0f /dev/dsk/c0t0d0s3 > output_1
Usage: ufsdump [0123456789fustdWwnNDCcbavloS [argument]] filesystem
This was incorrect usage, so it just created a file called output_1 with 0 bytes:
# ls -la output_1
-rw-r--r-- 1 root root 0 abr 12 14:12 output_1
Then, the syntax used was:
# ufsdump 0f /dev/rdsk/c0t0d0s3 output_1
This wrote a dump of the 0-byte file output_1 onto /dev/rdsk/c0t0d0s3 - the raw partition slice.
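For reference, the intended syntax puts the dump file right after the f flag and the filesystem to dump last (the output path below is only an illustrative example, not a path from the original setup):
# dump the slice into a regular file, not the other way around
ufsdump 0f /backup/c0t0d0s3.dump /dev/rdsk/c0t0d0s3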
Interestingly, since output_1 was a 0-byte file, we thought this would cause no harm to the filesystem, but it did.
When trying to ls in the mountpoint, the partition reported an I/O error. After unmounting and mounting again, the filesystem showed no contents, but the disk space was still reported as used, just like before.
I assume, at some point, the filesystem 'header' was affected, right? Or was it the slice information?
A small fsck try brings up this:
** /dev/rdsk/c0t0d0s3
** Last Mounted on /database
** Phase 1 - Check Blocks and Sizes
INCORRECT DISK BLOCK COUNT I=11 (400 should be 208)
CORRECT?
Disk block count / I=11
This seems to indicate that the command broke the filesystem's metadata about its own contents, right?
When we tried fsck -y -F ufs /dev/dsk.., various files were recovered, but not the .dbf files we are after (which are GB-sized).
What can be done now? Should I try every backup superblock reported by newfs -N?
EDIT: new information regarding partition
newfs output showing superblock information
# newfs -N /dev/rdsk/c0t0d0s3
Warning: 2826 sector(s) in last cylinder unallocated
/dev/rdsk/c0t0d0s3: 265104630 sectors in 43149 cylinders of 48 tracks, 128 sectors
129445,6MB in 2697 cyl groups (16 c/g, 48,00MB/g, 5824 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
98464, 196896, 295328, 393760, 492192, 590624, 689056, 787488, 885920,
Initializing cylinder groups:
.....................................................
super-block backups for last 10 cylinder groups at:
264150944, 264241184, 264339616, 264438048, 264536480, 264634912, 264733344,
264831776, 264930208, 265028640
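Given that list, one next step (as the newfs output itself suggests) would be to point fsck at one of the alternate superblocks, for example:
# try an alternate superblock from the list printed by newfs -N
fsck -F ufs -o b=98464 /dev/rdsk/c0t0d0s3
This is only a sketch of the next attempt: if the stray dump overwrote the primary superblock, an alternate copy may let fsck get further, but it will not bring back data blocks that were overwritten.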

PowerShell Invoke-SqlCmd and memory usage

If I execute the following Powershell command:
Invoke-Sqlcmd `
-ServerInstance '(local)' -Database 'TestDatabase' `
-Query "select top 1000000 * from dbo.SAMPLE_CUSTOMER" | Out-Null
I see my memory usage go through the roof: it uses 1 GB of memory.
If I start the command again, memory increases to 1.8 GB, then drops to about 800 MB (garbage collection?) and starts growing again.
I tried to limit the memory footprint of the PowerShell shell and plugins to 100 MB by following the steps in http://blogs.technet.com/b/heyscriptingguy/archive/2013/07/30/learn-how-to-configure-powershell-memory.aspx, but I still see memory grow far above the configured 100 MB.
I have some questions:
Why does PowerShell not respect the memory limitations given by the setting MaxMemoryPerShellMB?
Why does Invoke-Sqlcmd eat memory, and why doesn't it "forget" the records already processed in the pipeline?
Why does the PowerShell process not reclaim memory automatically when finished processing?
How can I process many SQL records without a large memory footprint?
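One workaround often suggested for this kind of problem (a sketch only, assuming the same local instance and table as above, not something from the original post) is to bypass Invoke-Sqlcmd and stream the rows through a SqlDataReader, so only one row is held in memory at a time:
# stream rows instead of buffering the whole result set in memory
$conn = New-Object System.Data.SqlClient.SqlConnection 'Server=(local);Database=TestDatabase;Integrated Security=True'
$conn.Open()
$cmd = $conn.CreateCommand()
$cmd.CommandText = 'select top 1000000 * from dbo.SAMPLE_CUSTOMER'
$reader = $cmd.ExecuteReader()
$rows = 0
while ($reader.Read()) {
    # process the current row here; nothing accumulates in the pipeline
    $rows++
}
$reader.Close()
$conn.Close()
"$rows rows read"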

How many bytes per inode?

I need to create a very large number of files that are not very big (around 4 KB to 8 KB each).
It's not possible on my computer because it uses up 100% of the inodes, and then I cannot create more files:
$ df -i /dev/sda5
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sda5 54362112 36381206 17980906 67% /scratch
(I started deleting files, which is why it's now at 67%)
The inode size is 256 bytes on my filesystem (ext4):
$ sudo tune2fs -l /dev/sda5 | grep Inode
Inode count: 54362112
Inodes per group: 8192
Inode blocks per group: 512
Inode size: 256
I wonder if it's possible to set this value very low, even below 128 (when reformatting). If yes, what value should I use?
Thanks
The default bytes per inode is usually 16384, which is the default inode_ratio in /etc/mke2fs.conf (it's read prior to filesystem creation). If you're running out of inodes, you might try for example:
mkfs.ext4 -i 8192 /dev/mapper/main-var2
Another option that affects this is -T, typically -T news which further reduces it to 4096.
Also, you cannot change the number of inodes in an ext3 or ext4 filesystem without re-creating it (or hex-editing it). Reiser filesystems allocate inodes dynamically, so you'll never run into this issue with them.
You can find out the approximate inode ratio by dividing the size of available space by the number of available inodes. For example:
$ sudo tune2fs -l /dev/sda1 | awk -F: ' \
/^Block count:/ { blocks = $2 } \
/^Inode count:/ { inodes = $2 } \
/^Block size:/ { block_size = $2 } \
END { blocks_per_inode = blocks/inodes; \
print "blocks per inode:\t", blocks_per_inode, \
"\nbytes per inode:\t", blocks_per_inode * block_size }'
blocks per inode: 3.99759
bytes per inode: 16374.1
I found the solution to my problem in the mke2fs man page:
-I inode-size
Specify the size of each inode in bytes. mke2fs creates 256-byte inodes by default. In kernels after 2.6.10 and some earlier vendor kernels it is possible to utilize inodes larger than 128 bytes to store extended attributes for improved performance. The inode-size value must be a power of 2 larger or equal to 128. The larger the inode-size, the more space the inode table will consume, and this reduces the usable space in the filesystem and can also negatively impact performance. Extended attributes stored in large inodes are not visible with older kernels, and such filesystems will not be mountable with 2.4 kernels at all. It is not possible to change this value after the filesystem is created.
The maximum you will be able to set is limited by your block size:
sudo tune2fs -l /dev/sda5 | grep "Block size"
Block size: 4096
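To make the distinction explicit: there are two different knobs, and only one of them controls how many inodes you get. The device name below is just the one from the question; double-check it, since mkfs destroys existing data.
# -I = size of each inode structure (power of 2, minimum 128)
# -i = bytes-per-inode ratio: one inode per N bytes of space, so a smaller N means more inodes
mkfs.ext4 -I 128 -i 4096 /dev/sda5
tune2fs -l /dev/sda5 | grep -E 'Inode count|Inode size'
Going below the block size (4096 here) for -i does not buy anything, since you cannot usefully have more inodes than blocks, so -i 4096 is the practical lower limit.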
Hope this helps.
