mkdir throws "No space left on device", while creating a large file is fine (plenty of space and inodes available) - filesystems

Very strange behaviour: I cannot create a directory.
[root@XXXXXX DEV]# mkdir 1
mkdir: cannot create directory `1': No space left on device
[root@dev-albert DEV]# pwd
/deployment/.octopus/Applications/OctopusServer/DEV
[root@XXXXXX DEV]# df -P /deployment
Filesystem                          1024-blocks    Used Available Capacity Mounted on
/dev/mapper/deploymentvg-deployment    10321208 5229888   4567096      54% /deployment
[root@dev-albert DEV]# df -Pi /deployment
Filesystem                           Inodes IUsed  IFree IUse% Mounted on
/dev/mapper/deploymentvg-deployment  655360 69129 586231   11% /deployment
As you can see, there is plenty of space and a good number of inodes free.
Does anyone have any clue what is happening with my system?
[root@dev-albert DEV]# dmsetup ls
rootvg-tmp (252:6)
rootvg-usr (252:7)
rootvg-var (252:8)
deploymentvg-usropenv (252:3)
deploymentvg-deployment (252:2)
rootvg-agent (252:4)
rootvg-oracle (252:11)
rootvg-varlock (252:9)
rootvg-deployment (252:5)
rootvg-swap (252:1)
rootvg-root (252:0)
rootvg-varspool (252:10)
top output
top - 14:44:35 up 347 days, 20:40, 2 users, load average: 2.02, 2.02, 2.05
Tasks: 125 total, 2 running, 123 sleeping, 0 stopped, 0 zombie
Cpu(s):100.0%us, 0.0%sy, 0.0%ni,117100.0%id,-42916200.0%wa, 0.0%hi, 0.0%si,200.0%st
Mem: 4071932k total, 3394132k used, 677800k free, 780312k buffers
Swap: 4194300k total, 22604k used, 4171696k free, 1742552k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
747 root 20 0 0 0 0 S 3.0 0.0 303:33.05 jbd2/dm-2-8
20679 root 20 0 0 0 0 S 2.7 0.0 0:14.24 kworker/0:2
16319 root 20 0 0 0 0 R 2.3 0.0 266:30.45 flush-252:2
When I run mkdir with strace:
open("/usr/lib/locale/locale-archive", O_RDONLY) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=99158576, ...}) = 0
mmap(NULL, 99158576, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7fd4c4b04000
close(3) = 0
mkdir("1", 0777) = -1 ENOSPC (No space left on
device)
open("/usr/share/locale/locale.alias", O_RDONLY) = 3
write(2, ": No space left on device", 25: No space left on device) = 25
uname output
Linux 2.6.39-400.17.1.el6uek.x86_64 #1 SMP Fri Feb 22 18:16:18 PST 2013 x86_64 x86_64 x86_64 GNU/Linux

Sometimes this can be caused by the b-tree that ext4 uses as a directory index hitting its height limit. If you get the "No space left on device" error from mkdir for some names but not others, or while there is plenty of space and inodes free, check dmesg for warnings like these:
EXT4-fs warning (device dm-0): ext4_dx_add_entry:2226: Directory (ino: 80087286) index full, reach max htree level :2
EXT4-fs warning (device dm-0): ext4_dx_add_entry:2230: Large directory feature is not enabled on this filesystem
That means you're hitting the b-tree limit, and in a pinch you can enable the large directory feature with:
tune2fs -O large_dir <dev>
It doesn't require unmounting or rebooting, and it raises the limit from roughly 10M to 2B entries. Depending on what you're doing, you're likely to hit performance bottlenecks or actually fill the disk before hitting the limit again, but I recommend rethinking your directory structure to avoid creating too many files and subdirectories in the same directory, and using the above only in an emergency.
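For illustration only, here is a minimal C sketch (not from the original post) of the situation described above: statvfs(3) can report plenty of free blocks and inodes while mkdir(2) still fails with ENOSPC because the parent directory's htree index is full. It assumes it is run from inside the affected directory.
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/statvfs.h>

int main(void)
{
    struct statvfs vfs;

    /* The filesystem itself still has free blocks and inodes. */
    if (statvfs(".", &vfs) == 0)
        printf("free blocks: %llu, free inodes: %llu\n",
               (unsigned long long)vfs.f_bavail,
               (unsigned long long)vfs.f_favail);

    /* mkdir can still fail with ENOSPC if the parent directory's
     * htree index has reached its maximum height. */
    if (mkdir("1", 0777) == -1 && errno == ENOSPC)
        fprintf(stderr, "mkdir: %s (directory index may be full)\n",
                strerror(errno));

    return 0;
}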

Related

Why are open file descriptors not getting reused and instead keep increasing in value?

I have a simple C HTTP server. I close the file descriptors for disk files and the new connection fds returned by accept(...), but I noticed that I keep getting file descriptor numbers bigger than the previous ones: for example, the file descriptor returned by accept starts at 4, then 5, then 4 again, and so on until it reaches the maximum number of open file descriptors on the system.
I have set that value to 10,000 on my system, but I am not sure why exactly the file descriptor number climbs to the maximum. And I am fairly sure that my program is closing the file descriptors.
So I would like to know: if there are not thousands of connections, how come new file descriptor numbers keep increasing, so that in around 24 hours I get the message accept: too many open files? What is this message?
Also, does the ulimit -n value get reset automatically without a system reboot?
As mentioned in the answer, the output of ls -la /proc/$$/fd is
dr-x------ 2 fawad fawad 0 Oct 11 11:15 .
dr-xr-xr-x 9 fawad fawad 0 Oct 11 11:15 ..
lrwx------ 1 fawad fawad 64 Oct 11 11:15 0 -> /dev/pts/3
lrwx------ 1 fawad fawad 64 Oct 11 11:15 1 -> /dev/pts/3
lrwx------ 1 fawad fawad 64 Oct 11 11:15 2 -> /dev/pts/3
lrwx------ 1 fawad fawad 64 Oct 11 11:25 255 -> /dev/pts/3
and the output of ps aux | grep lh is
root 49855 0.5 5.4 4930756 322328 ? Sl Oct09 15:58 /usr/share/atom/atom --executed-from=/home/fawad/Desktop/C++-work/lhparse --pid=49844 --no-sandbox
root 80901 0.0 0.0 25360 5952 pts/4 S+ 09:32 0:00 sudo ./lh
root 80902 0.0 0.0 1100852 2812 pts/4 S+ 09:32 0:00 ./lh
fawad 83419 0.0 0.0 19976 916 pts/3 S+ 11:27 0:00 grep --color=auto lh
I would like to know what the pts/4 etc. column is. Is this the file descriptor number?
It's likely that the socket represented by the file descriptor is in CLOSE_WAIT or TIME_WAIT state, which means the TCP stack holds the fd open for a bit longer, so you won't be able to reuse it immediately in this instance.
Once the socket is fully finished with and closed, the file descriptor number will then be available for reuse inside your program.
See: https://en.m.wikipedia.org/wiki/Transmission_Control_Protocol
"Protocol operation", and specifically the wait states.
To see what files are still open you can run
ls -la /proc/$$/fd
The output of this will also be of help:
ss -tan | head -5
LISTEN 0 511 *:80 *:*
SYN-RECV 0 0 192.0.2.145:80 203.0.113.5:35449
SYN-RECV 0 0 192.0.2.145:80 203.0.113.27:53599
ESTAB 0 0 192.0.2.145:80 203.0.113.27:33605
TIME-WAIT 0 0 192.0.2.145:80 203.0.113.47:50685
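For completeness, here is a hedged C sketch (not from the original post) of an accept loop that closes every connection fd on all paths, which is usually what keeps the numbers low. The serve() function and the canned response are illustrative only.
#include <errno.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>

static void serve(int listen_fd)
{
    const char resp[] = "HTTP/1.0 200 OK\r\nContent-Length: 0\r\n\r\n";

    for (;;) {
        int conn = accept(listen_fd, NULL, NULL);
        if (conn == -1) {
            if (errno == EINTR)
                continue;              /* interrupted by a signal, retry */
            perror("accept");          /* EMFILE here usually means leaked fds */
            continue;
        }

        /* ... read the request here ... */
        if (write(conn, resp, sizeof resp - 1) == -1)
            perror("write");

        /* Close on every path: the kernel may keep the socket in TIME_WAIT,
         * but the fd number becomes reusable as soon as close() returns. */
        if (close(conn) == -1)
            perror("close");
    }
}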

Linux get process start time of given PID

I need to get the start time of a process using C code in userspace.
The process will run as root, so I can fopen /proc/PID/stat.
I saw implementations, e.g.:
start time of a process on linux
or
http://brokestream.com/procstat.c
But they are invalid. Why are they invalid? Because if the process's 2nd field contains a space, e.g.:
[ilan#CentOS7286-64 tester]$ cat /proc/1077/stat
1077 (rs:main Q:Reg) S 1 1054 1054 0 -1 1077944384 21791 0 10 0 528 464 0 0 20 0 3 0 1056 321650688 1481 18446744073709551615 1 1 0 0 0 0 2146172671 16781830 1133601 18446744073709551615 0 0 -1 1 0 0 1 0 0 0 0 0 0 0 0 0 0
These solutions will not work.
Is there a better way of retrieving a process start time other than parsing the /proc/PID/stat results? Otherwise I can do the following logic (sketched in code after this list):
read a long - the first field is the pid
read chars until hitting the closing ')' - the 2nd field is tcomm (filename of the executable)
read a char - the 3rd field is the process state
In Solaris, you simply read the result into a psinfo_t struct.
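Here is a hedged sketch of that parsing logic (not from the original post); proc_starttime is a hypothetical helper name. The comm field is the only one that can contain spaces or ')' characters, so scanning backwards for the last ')' with strrchr makes the remaining fields unambiguous; field 22 of /proc/PID/stat is starttime, in clock ticks since boot.
#include <stdio.h>
#include <string.h>

/* Returns starttime (field 22 of /proc/PID/stat, in clock ticks since boot),
 * or -1 on error. Copes with spaces in the comm field by scanning backwards
 * for the last ')'. */
long long proc_starttime(int pid)
{
    char path[64], buf[4096];
    snprintf(path, sizeof path, "/proc/%d/stat", pid);

    FILE *f = fopen(path, "r");
    if (!f)
        return -1;
    size_t n = fread(buf, 1, sizeof buf - 1, f);
    fclose(f);
    buf[n] = '\0';

    char *p = strrchr(buf, ')');       /* end of the (comm) field */
    if (!p)
        return -1;
    p += 2;                            /* skip ") " -> field 3 (state) */

    /* Skip fields 3..21; starttime is field 22. */
    for (int field = 3; field < 22; field++) {
        p = strchr(p, ' ');
        if (!p)
            return -1;
        p++;
    }

    long long starttime;
    if (sscanf(p, "%lld", &starttime) != 1)
        return -1;
    return starttime;
}
Dividing starttime by sysconf(_SC_CLK_TCK) gives seconds after boot; adding the boot time (the btime line in /proc/stat) gives a wall-clock timestamp.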
You can simply use the stat(2) system call.
The creation time is not set by the proc filesystem, but you can use the modification time, because the modification time of a directory only changes when files are added to or removed from it. And because the contents of a directory in the proc filesystem only change if you replace kernel modules of the running kernel, you can be pretty sure that the modification time is also the creation time.
Example:
$ stat -c %y /proc/1
2018-06-01 11:46:57.512000000 +0200
$ uptime -s
2018-06-01 11:46:57
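A hedged C sketch of that approach (not from the original answer): stat(2) the /proc/PID directory and report st_mtime as the start time. print_start_time is a hypothetical helper name.
#include <stdio.h>
#include <time.h>
#include <sys/stat.h>

/* Prints the (assumed) start time of a process: the mtime of /proc/PID. */
int print_start_time(int pid)
{
    char path[64];
    struct stat st;

    snprintf(path, sizeof path, "/proc/%d", pid);
    if (stat(path, &st) == -1) {
        perror("stat");
        return -1;
    }

    char buf[64];
    strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S", localtime(&st.st_mtime));
    printf("%s started at %s\n", path, buf);
    return 0;
}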

Informix initialisation fails because the master daemon dies

I'm trying to install an Informix Developer instance on a Debian 7 server, to test a couple of programs with Informix as the database engine. I tried all the solutions I could find by googling the error messages, but they kept me going back and forth. Can anybody see what my problem is?
The log:
14:46:46 Parameter's user-configured value was adjusted.
(DS_MAX_SCANS)
14:46:46 Parameter's user-configured value was adjusted. (ONLIDX_MAXMEM)
14:46:46 IBM Informix Dynamic Server Started.
14:46:46 Warning: The IBM IDS Developer Edition license restriction limits
14:46:46 the total shared memory size for this server to 1048576 KB.
14:46:46 The maximum allowable shared memory was reset to this size to start the database server.
14:46:46 Requested shared memory segment size rounded from 8308KB to 8840KB
Sat Dec 19 14:46:48 2015
14:46:48 Successfully added a bufferpool of page size 2K.
14:46:48 Event alarms enabled. ALARMPROG = '/etc/Informix/etc/alarmprogram.sh'
14:46:48 Booting Language <c> from module <>
14:46:48 Loading Module <CNULL>
14:46:48 Booting Language <builtin> from module <>
14:46:48 Loading Module <BUILTINNULL>
14:46:53 DR: DRAUTO is 0 (Off)
14:46:53 DR: ENCRYPT_HDR is 0 (HDR encryption Disabled)
14:46:53 Event notification facility epoll enabled.
14:46:53 CCFLAGS2 value set to 0x200
14:46:53 SQL_FEAT_CTRL value set to 0x8008
14:46:53 SQL_DEF_CTRL value set to 0x4b0
14:46:53 IBM Informix Dynamic Server Version 12.10.FC5DE Software Serial Number AAA#B000000
14:46:53 requested number of KAIO events (32768) exceeds limit (31437). using 31437.
14:46:54 Value of FULL_DISK_INIT has been changed to 0.
14:46:54 Performance Advisory: The physical log size is smaller than the recommended size for a
server configured with RTO_SERVER_RESTART.
14:46:54 Results: Fast recovery performance might not be optimal.
14:46:54 Action: For best fast recovery performance when RTO_SERVER_RESTART is enabled,
increase the physical log size to at least 66602 KB. For servers
configured with a large buffer pool, this might not be necessary.
14:46:54 IBM Informix Dynamic Server Initialized -- Complete Disk Initialized.
14:46:54 Started 1 B-tree scanners.
14:46:54 B-tree scanner threshold set at 5000.
14:46:54 B-tree scanner range scan size set to -1.
14:46:54 B-tree scanner ALICE mode set to 6.
14:46:54 B-tree scanner index compression level set to med.
14:46:54 Warning: Invalid (non-existent/blobspace/disabled) dbspace listed
in DBSPACETEMP: 'tempdbs'
14:46:55 The Master Daemon Died
14:46:55 invoke_alarm(): /bin/sh -c '/etc/Informix/etc/alarmprogram.sh 5 6 "Internal Subsystem failure: 'MT'" "The Master Daemon Died" "" 6069'
14:46:55 invoke_alarm(): mt_exec failed, status 32512, errno 0
14:46:55 PANIC: Attempting to bring system down
And, if it helps, the onconfig file of the database:
ROOTNAME rootdbs
ROOTPATH /home/informix_storage/rootdbs
ROOTOFFSET 0
ROOTSIZE 2000000
MIRROR 0
MIRRORPATH $INFORMIXDIR/tmp/demo_on.root_mirror
MIRROROFFSET 0
PHYSFILE 29792
PLOG_OVERFLOW_PATH $INFORMIXDIR/tmp
PHYSBUFF 512
LOGFILES 4
LOGSIZE 7168
DYNAMIC_LOGS 2
LOGBUFF 256
LTXHWM 70
LTXEHWM 80
MSGPATH /etc/Informix/informix_inst_01.log
CONSOLE $INFORMIXDIR/tmp/online.con
TBLTBLFIRST 0
TBLTBLNEXT 0
TBLSPACE_STATS 1
DBSPACETEMP tempdbs
SBSPACETEMP
SBSPACENAME sbspace
SYSSBSPACENAME
ONDBSPACEDOWN 2
SERVERNUM 0
DBSERVERNAME informix_inst_01
DBSERVERALIASES dr_informix_inst_01, lo_informix_inst_01
FULL_DISK_INIT 0
NETTYPE ipcshm,1,50,CPU
LISTEN_TIMEOUT 60
MAX_INCOMPLETE_CONNECTIONS 1024
FASTPOLL 1
NUMFDSERVERS 4
NS_CACHE host=900,service=900,user=900,group=900
MULTIPROCESSOR 1
VPCLASS cpu,num=1,noage
VP_MEMORY_CACHE_KB 0
SINGLE_CPU_VP 1
AUTO_TUNE 1
CLEANERS 2
DIRECT_IO 1
LOCKS 40000
DEF_TABLE_LOCKMODE page
RESIDENT 0
SHMBASE 0x44000000L
SHMVIRTSIZE 101376
SHMADD 3165
EXTSHMADD 8192
SHMTOTAL 0
SHMVIRT_ALLOCSEG 0,3
SHMNOACCESS
CKPTINTVL 300
RTO_SERVER_RESTART 60
BLOCKTIMEOUT 3600
CONVERSION_GUARD 2
RESTORE_POINT_DIR $INFORMIXDIR/tmp
TXTIMEOUT 300
DEADLOCK_TIMEOUT 60
HETERO_COMMIT 0
TAPEDEV /dev/null
TAPEBLK 32
TAPESIZE 0
LTAPEDEV /dev/null
LTAPEBLK 32
LTAPESIZE 0
BAR_ACT_LOG $INFORMIXDIR/tmp/bar_act.log
BAR_DEBUG_LOG $INFORMIXDIR/tmp/bar_dbug.log
BAR_DEBUG 0
BAR_MAX_BACKUP 0
BAR_RETRY 1
BAR_NB_XPORT_COUNT 20
BAR_XFER_BUF_SIZE 31
RESTARTABLE_RESTORE ON
BAR_PROGRESS_FREQ 0
BAR_BSALIB_PATH $INFORMIXDIR/lib/libbsapsm.so
BACKUP_FILTER
RESTORE_FILTER
BAR_PERFORMANCE 0
BAR_CKPTSEC_TIMEOUT 15
PSM_DBS_POOL DBSPOOL
PSM_LOG_POOL LOGPOOL
DD_HASHSIZE 31
DD_HASHMAX 10
DS_HASHSIZE 31
DS_POOLSIZE 127
PC_HASHSIZE 31
PC_POOLSIZE 127
PRELOAD_DLL_FILE
STMT_CACHE 0
STMT_CACHE_HITS 0
STMT_CACHE_SIZE 512
STMT_CACHE_NOLIMIT 0
STMT_CACHE_NUMPOOL 1
USEOSTIME 0
STACKSIZE 64
ALLOW_NEWLINE 0
USELASTCOMMITTED "NONE"
FILLFACTOR 90
MAX_FILL_DATA_PAGES 0
BTSCANNER num=1,threshold=5000,rangesize=-1,alice=6,compression=default
ONLIDX_MAXMEM 5242880
MAX_PDQPRIORITY 100
DS_MAX_QUERIES 5
DS_TOTAL_MEMORY 26214400
DS_MAX_SCANS 5
DS_NONPDQ_QUERY_MEM 1310720
DATASKIP
OPTCOMPIND 2
DIRECTIVES 1
EXT_DIRECTIVES 0
OPT_GOAL -1
IFX_FOLDVIEW 1
STATCHANGE 10
USTLOW_SAMPLE 1
BATCHEDREAD_TABLE 1
BATCHEDREAD_INDEX 1
EXPLAIN_STAT 1
IFX_EXTEND_ROLE 1
SECURITY_LOCALCONNECTION
UNSECURE_ONSTAT
ADMIN_USER_MODE_WITH_DBSA
ADMIN_MODE_USERS
SSL_KEYSTORE_LABEL
TLS_VERSION
PLCY_POOLSIZE 127
PLCY_HASHSIZE 31
USRC_POOLSIZE 127
USRC_HASHSIZE 31
SQL_LOGICAL_CHAR OFF
SEQ_CACHE_SIZE 10
ENCRYPT_HDR
ENCRYPT_SMX
ENCRYPT_CDR 0
ENCRYPT_CIPHERS
ENCRYPT_MAC
ENCRYPT_MACFILE
ENCRYPT_SWITCH
CDR_EVALTHREADS 1,2
CDR_DSLOCKWAIT 5
CDR_QUEUEMEM 4096
CDR_NIFCOMPRESS 0
CDR_SERIAL 0
CDR_DBSPACE
CDR_QHDR_DBSPACE
CDR_QDATA_SBSPACE
CDR_SUPPRESS_ATSRISWARN
CDR_DELAY_PURGE_DTC 0
CDR_LOG_LAG_ACTION ddrblock
CDR_LOG_STAGING_MAXSIZE 0
CDR_MAX_DYNAMIC_LOGS 0
GRIDCOPY_DIR $INFORMIXDIR
CDR_TSINSTANCEID 0
CDR_MAX_FLUSH_SIZE 50
CDR_AUTO_DISCOVER 0
CDR_MEM 0
DRAUTO 0
DRINTERVAL 0
HDR_TXN_SCOPE NEAR_SYNC
DRTIMEOUT 30
HA_ALIAS
HA_FOC_ORDER SDS,HDR,RSS
DRLOSTFOUND $INFORMIXDIR/etc/dr.lostfound
DRIDXAUTO 0
LOG_INDEX_BUILDS
SDS_ENABLE
SDS_TIMEOUT 20
SDS_TEMPDBS
SDS_PAGING
SDS_LOGCHECK 10
SDS_ALTERNATE NONE
SDS_FLOW_CONTROL 0
UPDATABLE_SECONDARY 0
FAILOVER_CALLBACK
FAILOVER_TX_TIMEOUT 0
TEMPTAB_NOLOG 0
DELAY_APPLY 0
STOP_APPLY 0
LOG_STAGING_DIR
RSS_FLOW_CONTROL 0
SMX_NUMPIPES 1
ENABLE_SNAPSHOT_COPY 0
SMX_COMPRESS 0
SMX_PING_INTERVAL 10
SMX_PING_RETRY 6
CLUSTER_TXN_SCOPE SERVER
ON_RECVRY_THREADS 2
OFF_RECVRY_THREADS 5
DUMPDIR $INFORMIXDIR/tmp
DUMPSHMEM 1
DUMPGCORE 0
DUMPCORE 0
DUMPCNT 1
ALARMPROGRAM $INFORMIXDIR/etc/alarmprogram.sh
ALRM_ALL_EVENTS 0
STORAGE_FULL_ALARM 600,3
SYSALARMPROGRAM $INFORMIXDIR/etc/evidence.sh
RAS_PLOG_SPEED 14896
RAS_LLOG_SPEED 0
EILSEQ_COMPAT_MODE 0
QSTATS 0
WSTATS 0
USERMAPPING OFF
SP_AUTOEXPAND 1
SP_THRESHOLD 0
SP_WAITTIME 30
AUTOLOCATE 0
DEFAULTESCCHAR \
MQSERVER
MQCHLLIB
MQCHLTAB
REMOTE_SERVER_CFG
REMOTE_USERS_CFG
S6_USE_REMOTE_SERVER_CFG 0
LOW_MEMORY_RESERVE 0
LOW_MEMORY_MGR 0
GSKIT_VERSION
INFORMIXCONTIME 60
INFORMIXCONRETRY 1
JVPPROPFILE $INFORMIXDIR/extend/krakatoa/.jvpprops
JVPLOGFILE $INFORMIXDIR/tmp/jvp.log
JVPARGS -Dcom.ibm.tools.attach.enable=no
JVPCLASSPATH INFORMIXDIR/extend/krakatoa/jdbc.jar
BUFFERPOOL default,memory='auto'
BUFFERPOOL size=2k,memory=500MB
NETTYPE onsoctcp,1,150,NET
NETTYPE drsoctcp,1,150,NET
AUTO_TUNE_SERVER_SIZE SMALL
Thank you for any help!

NFS v4 with fast network and average IOPS disk: load increases sharply on large file transfers

NFS v4 with a fast network and an average-IOPS disk; the load increases sharply on large file transfers.
The problem seems to be IOPS.
The test case:
/etc/exports
server# /mnt/exports 192.168.6.0/24(rw,sync,no_subtree_check,no_root_squash,fsid=0)
server# /mnt/exports/nfs 192.168.6.0/24(rw,sync,no_subtree_check,no_root_squash)
client# mount -t nfs 192.168.6.131:/nfs /mnt/nfstest -vvv
(or client# mount -t nfs 192.168.6.131:/nfs /mnt/nfstest -o nfsvers=4,tcp,port=2049,async -vvv)
It works, but with the 'sync' flag the transfer drops from 50 MB/s to 500 kB/s.
http://ubuntuforums.org/archive/index.php/t-1478413.html
That topic seems to be solved by reducing wsize to wsize=300 - a small improvement, but not the solution.
Simple test with dd:
client# dd if=/dev/zero bs=1M count=6000 |pv | dd of=/mnt/nfstest/delete_me
server# iotop
TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND
1863 be/4 root 0.00 B/s 14.17 M/s 0.00 % 21.14 % [nfsd]
1864 be/4 root 0.00 B/s 7.42 M/s 0.00 % 17.39 % [nfsd]
1858 be/4 root 0.00 B/s 6.32 M/s 0.00 % 13.09 % [nfsd]
1861 be/4 root 0.00 B/s 13.26 M/s 0.00 % 12.03 % [nfsd]
server# dstat -r --top-io-adv --top-io --top-bio --aio -l -n -m
--io/total- -------most-expensive-i/o-process------- ----most-expensive---- ----most-expensive---- async ---load-avg--- -NET/total- ------memory-usage-----
read writ|process pid read write cpu| i/o process | block i/o process | #aio| 1m 5m 15m | recv send| used buff cach free
10.9 81.4 |init [2] 1 5526B 20k0.0%|init [2] 5526B 20k|nfsd 10B 407k| 0 |2.92 1.01 0.54| 0 0 |29.3M 78.9M 212M 4184k
1.00 1196 |sshd: root#pts/0 1943 1227B1264B 0%|sshd: root#1227B 1264B|nfsd 0 15M| 0 |2.92 1.01 0.54| 44M 319k|29.1M 78.9M 212M 4444k
0 1365 |sshd: root#pts/0 1943 485B 528B 0%|sshd: root# 485B 528B|nfsd 0 16M| 0 |2.92 1.01 0.54| 51M 318k|29.5M 78.9M 212M 4708k
Do you know any way of limiting the load without big changes to the configuration?
I am considering limiting the network speed with wondershaper or iptables, though that is not nice since other traffic would be harmed as well.
Someone suggested cgroups - it may be worth trying - but it is still not my 'feng shui'; I would hope to find a solution in the NFS config, since that is where the problem is, and it would be nice to have an in-one-place solution.
If it were possible to increase the 'sync' speed to 10-20 MB/s, that would be enough for me.
I think I nailed it:
On the server, change the disk scheduler:
for i in /sys/block/sd*/queue/scheduler ; do echo deadline > $i ; done
additionally (a small improvement - find the best value for you):
/etc/default/nfs-kernel-server
# Number of servers to start up
-RPCNFSDCOUNT=8
+RPCNFSDCOUNT=2
restart services
/etc/init.d/rpcbind restart
/etc/init.d/nfs-kernel-server restart
ps:
My current configs
server:
/etc/exports
/mnt/exports 192.168.6.0/24(rw,no_subtree_check,no_root_squash,fsid=0)
/mnt/exports/nfs 192.168.6.0/24(rw,no_subtree_check,no_root_squash)
client:
/etc/fstab
192.168.6.131:/nfs /mnt/nfstest nfs rsize=32768,wsize=32768,tcp,port=2049 0 0

Why do dots ("." and "..") appear when I print the files from a directory?

I'm printing the files from two directories in C. Here is my code:
char *list1[30], *list2[30];
int i = 0, j = 0, x = 0;
struct dirent *ent, *ent1;              /* needs <dirent.h>, <stdio.h>, <string.h> */
/* dirSource and dirDest are DIR * handles opened earlier with opendir() */
/* collect all the files and directories within each directory */
while ((ent = readdir(dirSource)) != NULL && i < 30) {
    list1[i] = strdup(ent->d_name);     /* copy the name: readdir() may reuse its buffer */
    i++;
}
while ((ent1 = readdir(dirDest)) != NULL && j < 30) {
    list2[j] = strdup(ent1->d_name);
    j++;
}
while (x < i && x < j) {
    printf("Daemon - %s\n", list1[x]);
    printf("Daemon1 - %s\n", list2[x]);
    x++;
}
I can print all the files, but every time I print the files in a directory, the result is this:
Daemon - .
Daemon1 - .
Daemon - ..
Daemon1 - ..
Daemon - fich5
Daemon1 - fich4
Daemon - fich3
Daemon1 - fich3
I don't understand why there are dots at the beginning.
Obs.: I don't know if it matters, but I'm using Ubuntu 14.04 on a pen drive, meaning every time I use Ubuntu I use the live session instead of dual booting on my PC.
. and .. are two special entries present in every directory on Linux and other Unix-like systems: . represents the current directory and .. represents the parent directory.
Every directory in Unix has the entries . (the current directory) and .. (the parent directory).
Given that they start with ".", they are hidden files; ls normally does not show them unless you use the "-a" option.
See:
[:~/tmp/lilla/uff] % ls -l
total 0
-rw-rw-r-- 1 romano romano 0 May 17 18:48 a
-rw-rw-r-- 1 romano romano 0 May 17 18:48 b
[:~/tmp/lilla/uff] % ls -la
total 8
drwxrwxr-x 2 romano romano 4096 May 17 18:48 .
drwxrwxr-x 3 romano romano 4096 May 17 18:47 ..
-rw-rw-r-- 1 romano romano 0 May 17 18:48 a
-rw-rw-r-- 1 romano romano 0 May 17 18:48 b
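If you do not want them in your own listing, the usual approach is to skip them explicitly; here is a hedged sketch (not from the original answers), with print_entries as a hypothetical helper name:
#include <dirent.h>
#include <stdio.h>
#include <string.h>

/* Print every entry of a directory except the special "." and ".." entries. */
static void print_entries(DIR *dir)
{
    struct dirent *ent;

    while ((ent = readdir(dir)) != NULL) {
        if (strcmp(ent->d_name, ".") == 0 || strcmp(ent->d_name, "..") == 0)
            continue;                     /* skip the current and parent directory */
        printf("Daemon - %s\n", ent->d_name);
    }
}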
