Qt: the best way to sync files with an SQLite db

I'm looking for the best way to sync files in some directories with an SQLite db.
First of all, I use a thread that recursively looks for files filtered by extension and adds them to my db.
Next I use QFileSystemWatcher to watch for file changes, and that works well.
The problem is that each time I run the app I don't know whether the files have changed, so I need to run the thread again, and it takes 100% of one CPU core for the duration of the scan (about a minute).
So what can I do to improve this algorithm?
Thanks
Regards
A993
edit:
The code is a recursive function, similar to this one that I use to count files in a directory (this function also takes 100% of one core):
int MediaScan_Thread::recursiveCount(QDir &dir)
{
    int i = dir.entryInfoList(_filters, QDir::Files).count();
    foreach (QFileInfo info, dir.entryInfoList(QDir::Dirs | QDir::NoDotAndDotDot))
    {
        QDir subdir(info.absoluteFilePath());
        i += recursiveCount(subdir);
    }
    return i;
}
I'm working on Linux, but I want the app to be multiplatform.

I would iterate over a single entryInfoList() result, recursing into the directories and applying the name filter to the files. There isn't an easy way to split a recursive directory listing across multiple threads, but once you have the file listing, parallel processing of it should be easy.
I combined the count and the file-listing calls into one hit on the I/O, because there shouldn't be any reason to do this twice. This version keeps a QStringList of the matches so more processing can be done later.
Using foreach in a recursive function can be problematic because a copy of the list is made, so I switched to a for loop with iterators.
The special addition is mimicking QDir's nameFilters functionality manually, since here entryInfoList() is called with one filter set that applies to both directories and files (not what we want).
One feature I omitted is a recursion depth limit, to avoid searching forever.
This code sample was compiled but not tested:
// declare in MediaScan_Thread and set them in the constructor or wherever appropriate:
QVector<QRegExp> _nameRegExps;
QStringList _filters;
QDir::Filters _dirFilters;

// ....

void MediaScan_Thread::initFilterRegExp()
{
    _nameRegExps.clear();
    for (int i = 0; i < _filters.size(); ++i)
    {
        _nameRegExps.append(QRegExp(_filters.at(i),
                                    (_dirFilters & QDir::CaseSensitive) ? Qt::CaseSensitive : Qt::CaseInsensitive,
                                    QRegExp::Wildcard));
    }
}

int MediaScan_Thread::recursiveCountAndMatchedFiles(QDir &dir, QStringList &matchedFiles)
{
    int i = 0;
    // One I/O hit for both files and directories; name filtering is done manually below.
    QFileInfoList lst = dir.entryInfoList(QStringList(), QDir::Files | QDir::Dirs | QDir::NoDotAndDotDot);
    for (auto itr = lst.begin(); itr != lst.end(); ++itr)
    {
        QFileInfo &info = (*itr);
        if (info.isDir())
        {
            QDir subdir(info.absoluteFilePath()); // recurse into the entry itself, not its parent
            i += recursiveCountAndMatchedFiles(subdir, matchedFiles);
        }
        else
        {
            QString fileName = info.absoluteFilePath();
            for (auto iter = _nameRegExps.constBegin(), end = _nameRegExps.constEnd();
                 iter != end; ++iter)
            {
                if (iter->exactMatch(fileName)) {
                    i++;
                    matchedFiles << fileName;
                    break;
                }
            }
        }
    }
    return i;
}
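A minimal usage sketch (the call site, starting directory, and filter values below are assumptions, not part of the answer):
// Hypothetical driver, e.g. inside MediaScan_Thread::run(); assumes _filters
// was filled elsewhere (for example _filters << "*.mp3" << "*.ogg").
void MediaScan_Thread::run()
{
    initFilterRegExp();                 // build the QRegExp list once

    QDir root("/home/user/music");      // assumed starting directory
    QStringList matchedFiles;
    int count = recursiveCountAndMatchedFiles(root, matchedFiles);

    qDebug() << "found" << count << "matching files";
    // matchedFiles can now be compared against the rows already stored in the
    // SQLite db, or split into chunks for parallel processing.
}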

Related

Remove file from directory - ext-like file system implementation

I have an issue. I'm currently trying to implement an ext-ish file system. I've done the inode operations such as read and write. I've created a structure that represents both a regular file and a directory. I have a problem when trying to remove a certain file from the directory.
char
dirremove(struct dirent *dir, struct dirent *file)
{
    dirent_t n = {.mode = NODDIR, .inumber = remdirnod,
                  .r = 0, .w = 0};
    strcpy(n.nm, dir->nm);
    dirent_t t;
    dir->r = 0;
    char r = 1;
    while (!dirread(dir, &t))
    {
        int tt = dir->r;
        dir->r = 0;
        dirent_t ff[3];
        filread(ff, dir, 3 * entrysiz);
        dir->r = tt;
        if (!strcmp(t.nm, ""))
            return 1;
        if (!(!strcmp(t.nm, file->nm) && !(r = 0)))
            assert(!dirappend(&n, &t));
    }
    assert(n.w == dir->w - entrysiz);
    dir->w = n.w;
    dir->r = n.r;
    copyinode(dir->inumber, remdirnod);
    return r;
}
This is the function called from the rm command. It takes the directory object (where the file is stored) and the file object to be deleted. I know this solution is not the best in terms of speed and memory usage but I'm still a beginner in this area, so don't hate me a lot, please :).
The function is designed to do the following: it reads all entries and checks whether the current one is the file to be deleted. If not, the entry is appended to a new directory (empty at the beginning) which replaces the old one at the end of the function. The "new" directory is an entry reserved entirely for this purpose, so there is no chance that all inodes are already used.
The test I've done is to create a file (works fine), then remove it, then create it again and remove it again. Everything works except for the second execution of dirremove. The directory has its dot and dot-dot entries by default, so it goes through them first. The first deletion succeeds, but the second time things go wrong somewhere.
int tt = dir->r;
dir->r = 0;
dirent_t ff[3];
filread(ff, dir, 3 * entrysiz);
dir->r = tt;
I added the ff array to read the whole content of the directory, to help me figure out whether the correct files are there. On the first and second iterations, all files (".", ".." and "some-other-file") are there, but on the iteration that should hold the entry of the file to be removed, the third file suddenly goes all zeroes.
I've debugged this for several hours, but it continues to fail the same way.
I probably haven't explained the failure in the best way, and there are likely things I forgot to mention, so if I missed something please don't ignore the question, just ask about it.

Merge sort large file in parallel with memory limit (Linux)

I need to sort a large binary file of size M, using t threads. Records in the file are all the same size. The task explicitly says that the amount of memory I can allocate is m, and it is much smaller than M. The hard drive is also guaranteed to have at least 2 * M of free space. This calls for merge sort, of course, but it turned out not to be so obvious. I see three different approaches here:
A. Map the files input, temp1 and temp2 into memory. Perform merge sort input -> temp1 -> temp2 -> temp1 ... until one of the temps is sorted. Threads only contend when selecting the next portion of work; there is no contention on reads/writes.
B. fopen the 3 files t times each, so each thread gets 3 FILE pointers, one per file. Again, they contend only for the next portion of work; reads and writes should work in parallel.
C. fopen the 3 files once each, keep them under mutexes; all threads work in parallel, but to grab more work, or to read, or to write, they lock the respective mutex.
Notes:
In real life I would choose A for sure. But doesn't it defeat the whole purpose of having a limited buffer? (In other words, isn't it cheating?) With such an approach I could even radix sort the whole file in place without an extra buffer. Also, this solution is Linux-specific; I think Linux is implied from the conversation, but it's not stated explicitly in the task description.
Regarding B, I think it works on Linux but isn't portable; see the Linux note above.
Regarding C, it's portable, but I am not sure how to optimize it (e.g. 8 threads with a small enough m will just bump into each other waiting their turn in the queue, then read/write a tiny portion of data, then instantly sort it and bump into each other again; IMO that is unlikely to work faster than 1 thread).
Questions:
Which solution is a better match for the task?
Which solution is a better design in real life (assuming Linux)?
Does B work? In other words, is opening a file multiple times and writing to different parts of it in parallel legal? (A sketch of what I mean mechanically is below the questions.)
Any alternative approaches?
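To make question 3 concrete, here is a minimal sketch of what I mean by B mechanically; the record size, chunk size, thread count and file names are made up, the output file is assumed to be pre-sized, and all error handling is omitted:
#include <cstddef>
#include <cstdio>
#include <thread>
#include <vector>

// Hypothetical sizes, for illustration only.
constexpr std::size_t kRecordSize       = 64;
constexpr std::size_t kRecordsPerThread = 1024;

// Approach B: every thread opens its own FILE* handles and works on a
// disjoint region of the files, so reads and writes need no locking.
void worker(int id, const char *inPath, const char *outPath)
{
    FILE *in  = std::fopen(inPath, "rb");
    FILE *out = std::fopen(outPath, "r+b");   // output file pre-sized by the caller

    long offset = static_cast<long>(id * kRecordsPerThread * kRecordSize);
    std::vector<char> buf(kRecordsPerThread * kRecordSize);

    std::fseek(in, offset, SEEK_SET);
    std::size_t n = std::fread(buf.data(), 1, buf.size(), in);

    // ... sort the records in buf here ...

    std::fseek(out, offset, SEEK_SET);        // write back to the same disjoint region
    std::fwrite(buf.data(), 1, n, out);

    std::fclose(in);
    std::fclose(out);
}

int main()
{
    std::vector<std::thread> threads;
    for (int id = 0; id < 4; ++id)            // 4 threads, for illustration
        threads.emplace_back(worker, id, "input.bin", "temp1.bin");
    for (auto &t : threads)
        t.join();
}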
Your question has many facets, so I will try to break it down a bit, while trying to answer almost all of your questions:
You are given a large file on a storage device that probably operates on blocks, i.e. you can load and store many entries at the same time. If you access a single entry from storage, you have to deal with a rather large access latency, which you can only try to hide by loading many elements at the same time, thus amortizing the latency over all of them.
Your main memory is quite fast compared to the storage (especially for random access), so you want to keep as much data in main memory as possible and only read and write sequential blocks on the storage. This is also the reason why A is not really cheating, since if you tried to use your storage for random access, you would be waaay slower than using main memory.
Combining these results, you can arrive at the following approach, which is basically A but with some engineering details that are usually used in external algorithms.
Use only a single dedicated thread for reading and writing on the storage.
This way you need only one file descriptor per file, and you could in theory even collect and reorder read and write requests from all threads within a small timeframe to get nearly sequential access patterns. Additionally, your threads can just queue a write request and continue with the next block without waiting for the I/O to finish.
Load t blocks (from input) into main memory, each of a maximum size such that you can run mergesort in parallel on each of these blocks. After the blocks are sorted, write them to the storage as temp1.
Repeat this until all blocks in the file have been sorted.
Now do a so-called multiway merge on the sorted blocks:
Every thread loads a certain number k of consecutive blocks from temp1 into memory and merges them, using a priority queue or tournament tree to find the next minimum to be inserted into the resulting block (a sketch of this is below). As soon as your block is full, you write it to temp2 on the storage to free up memory for the next block. After this step, conceptually swap temp1 and temp2.
You still need to do several merge steps, but this number is lower by a factor of log k compared to the regular two-way merges you probably meant in A. After the first few merge steps, your blocks will probably be too large to fit into main memory, so you split them into smaller blocks and, starting from the first small block, fetch the next one only when all of the previous elements have already been merged. Here you might even be able to do some prefetching, since the order of block accesses is predetermined by the block minima, but that is probably outside the scope of this question.
Note that the value for k is usually only limited by available memory.
Finally, you arrive at t huge blocks which need to be merged together. I don't really know whether there is a nice parallel approach to this; it might be necessary to just merge them sequentially, so again you can use a t-way merge as above to end up with a single sorted file.
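For illustration, a minimal sketch of the k-way merge step with a priority queue; the record type and the in-memory runs are placeholders, and a real implementation would stream blocks from temp1 and flush the output block to temp2 instead of keeping everything in memory:
#include <cstddef>
#include <cstdint>
#include <functional>
#include <queue>
#include <tuple>
#include <vector>

using Record = std::uint64_t;  // placeholder for a fixed-size record

// Merge k sorted runs into `out`; each run stands in for one sorted block of temp1.
void kWayMerge(const std::vector<std::vector<Record>> &runs, std::vector<Record> &out)
{
    // Heap entry: (current value, run index, index within that run).
    using Entry = std::tuple<Record, std::size_t, std::size_t>;
    std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> heap;

    for (std::size_t r = 0; r < runs.size(); ++r)
        if (!runs[r].empty())
            heap.emplace(runs[r][0], r, 0);

    while (!heap.empty()) {
        auto [value, run, idx] = heap.top();
        heap.pop();
        out.push_back(value);  // in the real thing: append to the temp2 block and
                               // flush it to disk whenever it is full
        if (idx + 1 < runs[run].size())
            heap.emplace(runs[run][idx + 1], run, idx + 1);
    }
}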
Gnu sort is a multi-threaded merge sort for text files, but its basic approach can be used here. Define a "chunk" as the number of records that can be sorted in memory of size m.
Sort phase: for each "chunk" of records, read the chunk, use a multi-threaded sort on it, then write it to a temp file, ending up with ceiling(M / m) temp files (a sketch of this phase follows below). Gnu sort sorts an array of pointers to records, partly because the records are variable length. For fixed-size records, in my testing, due to cache issues, it's faster to sort the records directly rather than sort an array of pointers to records (which results in cache-unfriendly random access of records), unless the record size is greater than somewhere between 128 and 256 bytes.
Merge phase: perform single-threaded k-way merges (such as with a priority queue) on the temp files until a single file is produced. Multi-threading doesn't help here, since it's assumed that the k-way merge phase is I/O bound and not CPU bound. For Gnu sort the default for k is 16 (it does 16-way merges on the temp files).
To keep from exceeding 2 x M space, files will need to be deleted once they have been read.
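For fixed-size records, a minimal sketch of the sort phase; the record size, chunk size and temp-file names are assumptions, the whole record is treated as the key, and the in-memory sort is shown single-threaded for brevity:
#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <cstring>
#include <string>
#include <vector>

constexpr std::size_t kRecordSize   = 64;       // assumed fixed record size
constexpr std::size_t kChunkRecords = 1 << 20;  // records per chunk, chosen so one chunk fits in m

struct Record {
    char bytes[kRecordSize];
    bool operator<(const Record &other) const {
        // The key is assumed to be the whole record, compared bytewise.
        return std::memcmp(bytes, other.bytes, kRecordSize) < 0;
    }
};

// Sort phase: read the input chunk by chunk, sort each chunk in memory
// (records sorted directly, not via pointers), write one temp file per chunk.
// Returns the number of temp files written; the "tempN.bin" names are made up.
int sortPhase(const char *inputPath)
{
    FILE *in = std::fopen(inputPath, "rb");
    std::vector<Record> chunk(kChunkRecords);
    int chunkCount = 0;

    for (;;) {
        std::size_t bytes = std::fread(chunk.data(), 1, chunk.size() * sizeof(Record), in);
        std::size_t records = bytes / sizeof(Record);
        if (records == 0)
            break;

        std::sort(chunk.begin(), chunk.begin() + records);

        std::string tempName = "temp" + std::to_string(chunkCount++) + ".bin";
        FILE *out = std::fopen(tempName.c_str(), "wb");
        std::fwrite(chunk.data(), sizeof(Record), records, out);
        std::fclose(out);
    }
    std::fclose(in);
    return chunkCount;
}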
If your file is way bigger than your RAM size, then this is the solution: https://stackoverflow.com/a/49839773/1647320
If your file size is 70-80% of your RAM size, then the following is the solution. It's an in-memory parallel merge sort.
Change these lines according to your machine: fPath is your one big input file, shared is where the execution log is stored, and fdir is where the intermediate files will be stored and merged.
public static final String fdir = "/tmp/";
public static final String shared = "/exports/home/schatterjee/cs553-pa2a/";
public static final String fPath = "/input/data-20GB.in";
public static final String opLog = shared+"Mysort20GB.log";
Then run the following program. Your final sorted file will be created with the name op2GB in the fdir path. The last line, Runtime.getRuntime().exec("valsort " + opfile + " > " + opLog);, checks whether the output is sorted. Remove this line if you don't have valsort installed on your machine or if the input file was not generated using gensort (http://www.ordinal.com/gensort.html).
Also, don't forget to change int totalLines = 20000000; to the total number of lines in your file, and the thread count (int threadCount = 8) should always be a power of 2.
import java.io.*;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.LinkedList;
import java.util.Comparator;
import java.util.HashMap;
import java.util.stream.Stream;

class SplitJob extends Thread {
    LinkedList<String> chunkName;
    int startLine, endLine;

    SplitJob(LinkedList<String> chunkName, int startLine, int endLine) {
        this.chunkName = chunkName;
        this.startLine = startLine;
        this.endLine = endLine;
    }

    public void run() {
        try {
            int totalLines = endLine + 1 - startLine;
            // Read this thread's slice of the input file, sorted in natural order.
            Stream<String> chunks =
                    Files.lines(Paths.get(Mysort2GB.fPath))
                            .skip(startLine - 1)
                            .limit(totalLines)
                            .sorted(Comparator.naturalOrder());
            chunks.forEach(line -> {
                chunkName.add(line);
            });
            System.out.println(" Done Writing " + Thread.currentThread().getName());
        } catch (Exception e) {
            System.out.println(e);
        }
    }
}

class MergeJob extends Thread {
    int list1, list2, oplist;

    MergeJob(int list1, int list2, int oplist) {
        this.list1 = list1;
        this.list2 = list2;
        this.oplist = oplist;
    }

    public void run() {
        try {
            System.out.println(list1 + " Started Merging " + list2);
            LinkedList<String> merged = new LinkedList<>();
            LinkedList<String> ilist1 = Mysort2GB.sortedChunks.get(list1);
            LinkedList<String> ilist2 = Mysort2GB.sortedChunks.get(list2);
            // Merge the two sorted lists based on which string is greater.
            while (ilist1.size() != 0 || ilist2.size() != 0) {
                if (ilist1.size() == 0 ||
                        (ilist2.size() != 0 && ilist1.get(0).compareTo(ilist2.get(0)) > 0)) {
                    merged.add(ilist2.remove(0));
                } else {
                    merged.add(ilist1.remove(0));
                }
            }
            System.out.println(list1 + " Done Merging " + list2);
            Mysort2GB.sortedChunks.remove(list1);
            Mysort2GB.sortedChunks.remove(list2);
            Mysort2GB.sortedChunks.put(oplist, merged);
        } catch (Exception e) {
            System.out.println(e);
        }
    }
}

public class Mysort2GB {
    //public static final String fdir = "/Users/diesel/Desktop/";
    public static final String fdir = "/tmp/";
    public static final String shared = "/exports/home/schatterjee/cs553-pa2a/";
    public static final String fPath = "/input/data-2GB.in";
    public static HashMap<Integer, LinkedList<String>> sortedChunks = new HashMap<>();
    public static final String opfile = fdir + "op2GB";
    public static final String opLog = shared + "mysort2GB.log";

    public static void main(String[] args) throws Exception {
        long startTime = System.nanoTime();
        int threadCount = 8; // Number of threads (must be a power of 2)
        int totalLines = 20000000;
        int linesPerFile = totalLines / threadCount;

        // Split phase: each thread reads and sorts its own slice of the input.
        LinkedList<Thread> activeThreads = new LinkedList<Thread>();
        for (int i = 1; i <= threadCount; i++) {
            int startLine = i == 1 ? i : (i - 1) * linesPerFile + 1;
            int endLine = i * linesPerFile;
            LinkedList<String> thisChunk = new LinkedList<>();
            SplitJob mapThreads = new SplitJob(thisChunk, startLine, endLine);
            sortedChunks.put(i, thisChunk);
            activeThreads.add(mapThreads);
            mapThreads.start();
        }
        activeThreads.stream().forEach(t -> {
            try {
                t.join();
            } catch (Exception e) {
            }
        });

        // Merge phase: pairwise merges, one tree level at a time.
        int treeHeight = (int) (Math.log(threadCount) / Math.log(2));
        for (int i = 0; i < treeHeight; i++) {
            LinkedList<Thread> actvThreads = new LinkedList<Thread>();
            // threadCount >> i is the number of lists remaining at this level.
            for (int j = 1, itr = 1; j <= threadCount >> i; j += 2, itr++) {
                int offset = i * 100;
                int list1 = j + offset;
                int list2 = (j + 1) + offset;
                int opList = itr + ((i + 1) * 100);
                MergeJob reduceThreads =
                        new MergeJob(list1, list2, opList);
                actvThreads.add(reduceThreads);
                reduceThreads.start();
            }
            actvThreads.stream().forEach(t -> {
                try {
                    t.join();
                } catch (Exception e) {
                }
            });
        }

        // Write the final merged list to the output file.
        BufferedWriter writer = Files.newBufferedWriter(Paths.get(opfile));
        sortedChunks.get(treeHeight * 100 + 1).forEach(line -> {
            try {
                writer.write(line + "\r\n");
            } catch (Exception e) {
            }
        });
        writer.close();

        long endTime = System.nanoTime();
        double timeTaken = (endTime - startTime) / 1e9;
        System.out.println(timeTaken);

        BufferedWriter logFile = new BufferedWriter(new FileWriter(opLog, true));
        logFile.write("Time Taken in seconds:" + timeTaken);
        Runtime.getRuntime().exec("valsort " + opfile + " > " + opLog);
        logFile.close();
    }
}

Restore or remove the Linux Kernel Module from sysfs

I recently coded an LKM which has the ability to hide itself. Everything works just fine when I hide the module, but when I restore it and look at it in lsmod, the value of the Used by column is suddenly -2:
Module Size Used by
my_module 13324 -2
vboxsf 43798 1
dm_crypt 23177 0
nfsd 284396 2
auth_rpcgss 59309 1 nfsd
nfs_acl 12837 1 nfsd
nfs 240815 0
and when I try to remove it I get the error rmmod: ERROR: Module my_module is builtin. I know that the value is a refcount for the kobject associated with the module and that the module can only be removed when it is 0. I am almost certain this happens because when I hide the module I delete all of its entries in /sys/module (holders, parameters, sections, srcversion etc.). Can someone help me with the remove operation, or with restoring the files? (I don't get any errors in dmesg.)
Here is the code:
void module_hide(void) {
    if (module_hidden) // is hidden
        return;
    module_prev = THIS_MODULE->list.prev;
    kobject_prev = &THIS_MODULE->mkobj.kobj;
    kobject_parent_prev = THIS_MODULE->mkobj.kobj.parent;
    sect_attrs_bkp = THIS_MODULE->sect_attrs;
    notes_attrs_bkp = THIS_MODULE->notes_attrs;
    list_del(&THIS_MODULE->list); // remove from procfs
    //kobject_del(THIS_MODULE->holders_dir);
    kobject_del(&THIS_MODULE->mkobj.kobj); // remove from sysfs
    THIS_MODULE->sect_attrs = NULL;
    THIS_MODULE->notes_attrs = NULL;
    module_hidden = (unsigned int)0x1;
}

void module_show(void) {
    int result, result2;
    if (!module_hidden) // is not hidden
        return;
    list_add(&THIS_MODULE->list, module_prev); // add to procfs
    result = kobject_add(&THIS_MODULE->mkobj.kobj, kobject_parent_prev, "my_module"); // add the module to sysfs
    if (result < 0) {
        printk(KERN_ALERT "Error to restore the old kobject\n");
    }
    result2 = kobject_add(THIS_MODULE->holders_dir, &THIS_MODULE->mkobj.kobj, "holders"); // add the holders dir to the module folder
    if (!THIS_MODULE->holders_dir) {
        printk(KERN_ALERT "Error to restore the old holders_dir\n");
    }
    THIS_MODULE->sect_attrs = sect_attrs_bkp;
    THIS_MODULE->notes_attrs = notes_attrs_bkp;
    //kobject_get(&THIS_MODULE->mkobj.kobj);
    //tried using THIS_MODULE->refcnt = 0; and kobject_get(&THIS_MODULE->mkobj.kobj) with no luck
    module_hidden = (unsigned int)0x0;
}
Thanks
Using kobject_add will only add the directory, as you already know, while kobject_del removes the directory and all of its subdirectories.
Hence, as you mention, you need to add back all of the needed subdirectories.
To understand how the subdirs are added, read the source code of sys_init_module carefully in module.c, or read kobject_del -> sysfs_remove_dir,
which removes all attributes (files) and subdirs by recursively clearing the kobject's kernfs nodes.
Thus, you need to recreate the structure recursively with all of its attributes using the functions
sysfs_add_file_mode_ns
sysfs_create_dir_ns
or:
__kernfs_create_file
kernfs_create_empty_dir
For example, to add the sections entries back, use the following line:
sysfs_create_group(&THIS_MODULE->mkobj.kobj, &sect_attrs_bkp->grp);
You need to change more values in order to fix the problem completely, but to restore the directories this will be enough.
Another, and perhaps easier, solution would be to just make your module's directory invisible by hijacking getdents_t and getdents64_t, as done in Diamorphine.
I solved it
static void populate_sysfs(void)
{
    int i;
    THIS_MODULE->holders_dir = kobject_create_and_add("holders", &THIS_MODULE->mkobj.kobj);
    for (i = 0; (THIS_MODULE->modinfo_attrs[i].attr.name) != NULL; i++) {
        if (sysfs_create_file(&THIS_MODULE->mkobj.kobj, &THIS_MODULE->modinfo_attrs[i].attr) != 0)
            break;
    }
}

How do I create recursive directories for the following requirement in C?

I expect to have more than one million files with unique names. I have been told that if I put all these files in one or two directories, the search speed for them will be extremely slow. So I have come up with the following directory architecture.
I want the directory structure to branch out into 10 subdirectories, with the subdirectories going 4 levels deep. Because the file names are guaranteed to be unique, I want to hash them to decide which directory a file goes into and to find it again later. The random hash values will give each directory approximately 1,000 files.
So if F is the root directory, then inserting or searching for a file has to go through these steps:
I want to use the numbers 0-9 as directory names
h=hash(filename)
sprintf(filepath,"f//%d//%d//%d//%d//.txt",h%10,h%10,h%10,h%10);
HOW DO I CREATE THESE DIRECTORIES?
EDIT:
All the files are text files.
The program will be distributed to many people in order to collect information for research, so it is important that these files are created like this.
EDIT:
I created the following code to implement perreal's pseudocode. It compiles successfully but gives the runtime error shown at the end.
The error occurs at the sprintf() line.
#include<iostream>
#include<stdlib.h>
#include<windows.h>

void make_dir(int depth, char *dir) {
    if (depth < 4) {
        if (! CreateDirectoryA (dir,NULL))
            for (int i = 0; i < 10; i++) {
                sprintf(dir,"\\%d",i);
                char *sdir=NULL ;
                strcpy(sdir,dir);
                CreateDirectoryA(sdir,NULL);
                make_dir(depth + 1, sdir);
            }
    }
}

int main()
{
    make_dir(0,"dir");
    return 1;
}
Unhandled exception at 0x5b9c1cee (msvcr100d.dll) in mkdir.exe:
0xC0000005: Access violation writing location 0x00be5898.
Kind of pseudocode, but it can be done like this:
void make_dir(int depth, char *dir) {
    if (depth < 4) {
        CreateDirectoryA(dir, NULL);
        for (int i = 0; i < 10; i++) {
            char *sdir = (char *)malloc(strlen(dir) + 10); /* room for "\", one digit and the NUL */
            strcpy(sdir, dir);
            sprintf(sdir + strlen(sdir), "\\%d", i);
            printf("%s\n", sdir);
            //CreateDirectoryA(sdir,NULL);
            make_dir(depth + 1, sdir);
            free(sdir);
        }
    }
}
And call it with make_dir(0, rootdir);
Do not do this: sprintf(dir,"\\%d",i);
dir is a const, read-only string in your example.
You're also likely to run off the end of the string, corrupting things that follow it in memory.
Do not copy to sdir without allocating memory first:
sdir = (char *)malloc( strlen( dir ) + 1 );
At the end of the function make_dir, you will have to call free( sdir ); so you do not leak memory.
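Putting those fixes together, a minimal sketch of the corrected recursion using std::string, so no manual allocation is needed at all; the root name "f" and the Windows-only CreateDirectoryA call follow the question, everything else is an assumption:
#include <string>
#include <windows.h>

// Creates the 10-way, 4-level-deep directory tree below `dir`.
void make_dir(int depth, const std::string &dir) {
    CreateDirectoryA(dir.c_str(), NULL);   // an "already exists" failure is harmless here
    if (depth >= 4)
        return;
    for (int i = 0; i < 10; i++)
        make_dir(depth + 1, dir + "\\" + std::to_string(i));
}

int main() {
    make_dir(0, "f");                      // "f" is the root directory from the question
    return 0;
}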

Recursive CreateDirectory

I found many examples of creating directories recursively, but not the one I was looking for.
Here is the spec.
Given input:
\\server\share\aa\bb\cc
c:\aa\bb\cc
Using the helper API:
CreateDirectory(char *path), which returns TRUE if successful and FALSE otherwise.
Condition: there should not be any parsing to distinguish whether the path is local or a server share.
Write a routine in C or C++.
I think it's quite a bit easier... here is a version that works on every Windows version:
std::string::size_type pos = 0;
do
{
    pos = path.find_first_of("\\/", pos + 1);
    CreateDirectory(path.substr(0, pos).c_str(), NULL);
} while (pos != std::string::npos);
Unicode:
pos = path.find_first_of(L"\\/", pos + 1);
Regards,
This might be exactly what you want.
It doesn't try to do any parsing to distinguish whether the path is local or a server share.
bool TryCreateDirectory(char *path){
    char *p;
    bool b;
    if(
        !(b=CreateDirectory(path))
        &&
        !(b=NULL==(p=strrchr(path, '\\')))
    ){
        size_t i = p - path;
        char *parent = (char *)malloc(i + 1);   /* copy of the parent path only */
        strncpy(parent, path, i);
        parent[i] = '\0';
        b = TryCreateDirectory(parent);
        free(parent);
        b = b ? CreateDirectory(path) : false;
    }
    return b;
}
The algorithm is quite simple: recursively pass the string of the next-higher-level directory while creation of the current level fails, until one succeeds or there is no higher level left. When the inner call returns successfully, create the current level. The method does not itself parse to determine local vs. server paths; that is left to CreateDirectory.
In WINAPI, CreateDirectory will never allow you to create "c:" or "\"; when the path reaches that level, the method soon falls into calling itself with path="", and this fails too. This is the reason Microsoft defined the file-sharing naming rules like this: for compatibility with the DOS path rules and to simplify the coding effort.
Totally hackish and insecure and nothing you'd ever actually want to do in production code, but...
Warning: here be code that was typed in a browser:
int createDirectory(const char * path) {
    char * buffer = malloc((strlen(path) + 10) * sizeof(char));
    sprintf(buffer, "mkdir -p %s", path);
    int result = system(buffer);
    free(buffer);
    return result;
}
How about using MakeSureDirectoryPathExists() ?
Just walk through each directory level in the path, starting from the root, attempting to create the next level.
If any of the CreateDirectory calls fails, you can exit early; you're successful if you get to the end of the path without a failure.
This assumes that calling CreateDirectory on a path that already exists has no ill effects.
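A minimal sketch of that walk, assuming the one-argument CreateDirectory helper from the spec and, as noted above, that calling it on an already existing level still reports success:
#include <string>

bool CreateDirectory(char *path);   // helper from the spec: TRUE on success, FALSE otherwise

// Walk the path root-first, creating each level; exit early on the first failure.
bool CreateDirectoryRecursive(std::string path)
{
    std::string::size_type pos = 0;
    while ((pos = path.find_first_of("\\/", pos + 1)) != std::string::npos) {
        std::string prefix = path.substr(0, pos);
        if (!CreateDirectory(&prefix[0]))
            return false;
    }
    return CreateDirectory(&path[0]);   // finally, the full path
}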
The requirement of not parsing the pathname for server names is interesting, as it seems to concede that parsing for / is required.
Perhaps the idea is to avoid building in hackish expressions for the potentially complex syntax of hosts and mount points, which on some systems can have elaborate credentials encoded in them.
If it's homework, I may be giving away the algorithm you are supposed to think up, but it occurs to me that one way to meet those requirements is to start by attempting to mkdir the full pathname. If that fails, trim off the last directory and try again; if that fails, trim off another and try again... Eventually you will reach a root directory without needing to understand the server syntax, and then you will need to start adding pathname components back, making the subdirs one by one.
std::pair<bool, unsigned long> CreateDirectory(std::basic_string<_TCHAR> path)
{
    _ASSERT(!path.empty());
    typedef std::basic_string<_TCHAR> tstring;
    tstring::size_type pos = 0;
    while ((pos = path.find_first_of(_T("\\/"), pos + 1)) != tstring::npos)
    {
        ::CreateDirectory(path.substr(0, pos + 1).c_str(), nullptr);
    }
    if ((pos = path.find_first_of(_T("\\/"), path.length() - 1)) == tstring::npos)
    {
        path.append(_T("\\"));
    }
    ::CreateDirectory(path.c_str(), nullptr);
    return std::make_pair(
        ::GetFileAttributes(path.c_str()) != INVALID_FILE_ATTRIBUTES,
        ::GetLastError()
    );
}
void createFolders(const std::string &s, char delim) {
    std::stringstream ss(s);
    std::string item;
    std::string combinedName;               // grows one path component per iteration
    while (std::getline(ss, item, delim)) {
        combinedName += item;
        combinedName += delim;
        cout << combinedName << endl;
        struct stat st = {0};
        if (stat(combinedName.c_str(), &st) == -1)
        {
#if REDHAT
            mkdir(combinedName.c_str(), 0777);
#else
            CreateDirectory(combinedName.c_str(), NULL);
#endif
        }
    }
}
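A hypothetical call, splitting on '/' (the path itself is made up):
createFolders("f/0/3/7/2", '/');   // creates f/, f/0/, f/0/3/, ... as needed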
