What is a good project structure in C?

I have been working on Java/J2EE projects, in which I follow the Maven structure.
I now want to develop something in C, say a command-line interpreter on Linux (Ubuntu).
I have never developed a project in C, and I want to know what project structure I should follow.

There is no single "standard" for C projects in this respect. Certainly, if your project is small, everything will frequently be placed in a single directory.
You can try to download some popular open-source C projects and take a look at their code.
On a lower level, code should be modular. Each module (which in C is usually manifested in a data structure with a set of functions to act upon it) has its own pair of .h and .c files, with the .h file being the public interface visible to the clients of the module, and the .c file being the private implementation.
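For instance, a hypothetical counter module might look like this (a minimal sketch; all names are illustrative):

/* counter.h -- the public interface visible to clients */
#ifndef COUNTER_H
#define COUNTER_H

typedef struct counter counter;   /* opaque: clients never see the fields */

counter *counter_new(void);
void     counter_increment(counter *c);
int      counter_value(const counter *c);
void     counter_free(counter *c);

#endif

/* counter.c -- the private implementation */
#include <stdlib.h>
#include "counter.h"

struct counter {
    int value;                    /* hidden from clients of the module */
};

counter *counter_new(void)
{
    counter *c = malloc(sizeof *c);
    if (c != NULL)
        c->value = 0;
    return c;
}

void counter_increment(counter *c) { c->value++; }

int counter_value(const counter *c) { return c->value; }

void counter_free(counter *c) { free(c); }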

As Eli Bendersky said, it depends strictly on how complex your project is.
The general advice is to split as much as possible into libraries. The point is that you may want to reuse your libraries elsewhere. For example, this is a project of mine:
├── AUTHORS
├── COPYING
├── ChangeLog
├── Makefile.am
├── NEWS
├── README
├── configure.ac
├── libs
│   ├── featsel
│   │   ├── Makefile.am
│   │   ├── commander.c
│   │   ├── featsel
│   │   │   ├── commander.h
│   │   │   ├── feattuple.h
│   │   │   └── types.h
│   │   ├── featsel.h
│   │   ├── feattuple.c
│   │   ├── headers
│   │   │   └── datatypes.h
│   │   └── tests
│   │       ├── Makefile.am
│   │       └── test00.c
│   ├── mbox
│   │   ├── Makefile.am
│   │   ├── README
│   │   ├── control.c
│   │   ├── error.c
│   │   ├── headers
│   │   │   ├── datatypes.h
│   │   │   ├── mail.h
│   │   │   ├── parse.h
│   │   │   ├── split.h
│   │   │   └── strings.h
│   │   ├── interface.c
│   │   ├── mail.c
│   │   ├── mbox
│   │   │   ├── descriptor.h
│   │   │   ├── error.h
│   │   │   ├── mail.h
│   │   │   └── types.h
│   │   ├── mbox.h
│   │   ├── parse.c
│   │   ├── split.c
│   │   └── strings.c
│   └── thread_queue
│       ├── Makefile.am
│       ├── thrdqueue.c
│       └── thrdqueue.h
├── reconf
└── src
    ├── Makefile.am
    └── main.c
I personally prefer to put all libraries into a libs directory. Every library except trivial ones has its own private headers directory, and exports its public headers through a directory with the same name as the library.
The source file of the program itself is placed in the src directory.

A suggestion:
/project
    README
    LICENCE
    Makefile
    # maybe configure.ac, Makefile.am, etc.
    # see http://en.wikipedia.org/wiki/GNU_build_system
    /src
        Makefile
        a.h
        a.c
        b.h
        b.c
        /subunit
            x.h
            x.c
            y.h
            y.c
        # each file.c usually has a matching header file.h, but not necessarily
        ...
Take a look at Nginx on GitHub and browse its project structure online.

Separate functionality into modules: .c files with implementation details/definitions paired with .h files with declarations.
Try not to pollute namespaces: use static for internal functions and a common module prefix for external symbols (see the sketch below).
Create libraries if you have functionality that can be encapsulated and reused.
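A minimal sketch of the namespace point (a hypothetical mbox-style module; all names are illustrative):

/* mbox.h -- exported symbols carry the module prefix */
#ifndef MBOX_H
#define MBOX_H
int mbox_is_from_line(const char *line);
#endif

/* mbox.c */
#include <string.h>
#include "mbox.h"

/* static: visible only inside mbox.c, so it cannot clash with a
   helper of the same name in another module */
static int starts_with(const char *s, const char *prefix)
{
    return strncmp(s, prefix, strlen(prefix)) == 0;
}

/* external: the mbox_ prefix keeps the global namespace clean */
int mbox_is_from_line(const char *line)
{
    return starts_with(line, "From ");
}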

You can refer to the OpenSSL project structure. It's a famous open-source project and has a good structure.

Related

VSCode: How to define unique includePaths for each core (sub-directories) in STM32h7 dual-core project within same git repo?

I have a git repository for an STM32 H7 dual-core project. Opening the repo-level folder, my structure looks like the following:
PARENT DIRECTORY (STM32H755)
├── .git
├── .vscode
│   └── c_cpp_properties.json
│
├── CM4
│   ├── Core
│   │   ├── Inc
│   │   │   └── .h files for CM4 only
│   │   └── Src
│   │       └── .c files for CM4 only
│   └── ...
│
├── CM7
│   ├── Core
│   │   ├── Inc
│   │   │   └── .h files for CM7 only
│   │   └── Src
│   │       └── .c files for CM7 only
│   └── ...
│
├── Common
│   ├── Core
│   │   ├── Inc
│   │   │   └── .h files for BOTH CM4 and CM7
│   │   └── Src
│   │       └── .c files for BOTH CM4 and CM7
│   └── ...
│
I need to specify the includePath for each of the CM4 and CM7 cores independently, including that specific core's .h files as well as the common .h files.
How is this achieved?
The default path within the c_cpp_properties.json file:
"includePath": [
    "${workspaceFolder}/**"
],
causes issues for similarly named files shared between CM4 and CM7 (e.g. main.h).
Conversely, using core-specific paths, such as the following for CM4:
"includePath": [
    "${workspaceFolder}/CM4/**",
    "${workspaceFolder}/Common/**"
],
causes issues for the other core's files (e.g. CM7's main.h).
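One way to keep the cores apart (a sketch, not from the thread; the configuration names are illustrative) is to define one configuration per core in c_cpp_properties.json and switch between them with the C/C++ extension's configuration selector in the status bar:
{
    "configurations": [
        {
            "name": "CM4",
            "includePath": [
                "${workspaceFolder}/CM4/**",
                "${workspaceFolder}/Common/**"
            ]
        },
        {
            "name": "CM7",
            "includePath": [
                "${workspaceFolder}/CM7/**",
                "${workspaceFolder}/Common/**"
            ]
        }
    ],
    "version": 4
}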

How to structure and name files in React app?

I would like to ask for your opinion on the file structure and file names.
I am a beginner React programmer, and I am interested in the opinions of experienced developers.
Is there anything you don't really like about the structure and the names themselves?
└── src
    ├── components
    │   ├── layout
    │   │   └── ThemeSwitcher.tsx
    │   ├── movieDetail
    │   │   ├── BackToHomepage.tsx
    │   │   ├── FavoriteMovieButton.tsx
    │   │   ├── MovieLoader.tsx
    │   │   └── MovieTable.tsx
    │   └── movieList
    │       ├── MoviesList.tsx
    │       ├── PaginationBar.tsx
    │       └── SearchBar.tsx
    ├── img
    │   ├── favorite-active.png
    │   └── favorite-inactive.png
    ├── layout
    │   ├── Header.tsx
    │   └── Main.tsx
    ├── pages
    │   ├── Detail.tsx
    │   ├── Favorites.tsx
    │   └── Homepage.tsx
    ├── redux
    │   ├── movieDetail
    │   │   ├── movieDetailSaga.tsx
    │   │   ├── movieDetailSlice.tsx
    │   │   └── movieDetailTypes.tsx
    │   ├── movieList
    │   │   ├── moviesListSaga.tsx
    │   │   ├── moviesListSlice.tsx
    │   │   └── moviesListTypes.tsx
    │   ├── rootSaga.tsx
    │   └── store.tsx
    ├── styles
    │   ├── abstracts
    │   │   ├── mixin.scss
    │   │   └── variables.scss
    │   ├── base
    │   │   ├── reset.scss
    │   │   └── typography.scss
    │   ├── components
    │   │   ├── layout
    │   │   │   └── ThemeSwitcher.scss
    │   │   ├── movieDetail
    │   │   │   ├── BackToHomepage.scss
    │   │   │   ├── FavoriteMovieButton.scss
    │   │   │   ├── MovieLoader.scss
    │   │   │   └── MovieTable.scss
    │   │   └── movieList
    │   │       ├── MoviesList.scss
    │   │       ├── PaginationBar.scss
    │   │       └── SearchBar.scss
    │   ├── pages
    │   │   ├── Detail.scss
    │   │   ├── Favorites.scss
    │   │   └── Homepage.scss
    │   └── main.scss
    ├── .d.ts
    ├── App.tsx
    └── index.tsx
The following technologies were used in this particular project:
Create React App
Typescript
Redux Toolkit
Redux Saga
Sass
Thank you for every suggestion.
https://reactjs.org/docs/faq-structure.html
"If you’re just starting a project, don’t spend more than five minutes on choosing a file structure. Pick any of the above approaches (or come up with your own) and start writing code! You’ll likely want to rethink it anyway after you’ve written some real code."
Your structure is fine. As it's already been said, do not overthink it.
That said, since you asked: I personally like to keep components related to the same feature together, as you did.

How to watch all .js files with npx, and have the output file be one directory above the original React .js file?

I am trying to create a React web app that has multiple pages. I was able to configure Babel so that it watches a single directory and outputs the vanilla JavaScript files one directory above. The problem is that I have multiple folders with React files, and I do not want to manually run a watch command for each directory. I was wondering whether I could do this with one command that watches all js/react/ directories and outputs to the respective js/ directories.
//node_modules have been excluded for brevity
.
├── app.js
├── environment.js
├── package-lock.json
├── package.json
├── public
│   ├── colab
│   │   ├── create
│   │   │   ├── index.html
│   │   │   ├── main.js
│   │   │   ├── project.js
│   │   │   └── style.css
│   │   ├── home
│   │   │   ├── index.html
│   │   │   ├── js
│   │   │   │   └── react
│   │   │   │       ├── main.js
│   │   │   │       └── project.js
│   │   │   ├── style.css
│   │   │   └── test.json
│   │   └── navbar
│   └── homepage
│       ├── css
│       │   └── style.css
│       ├── images
│       │   ├── bixby.png
│       │   ├── fortniteskill.png
│       │   ├── turtle.png
│       │   └── turtle2.jpg
│       ├── index.html
│       └── js
│           └── react
│               └── main.js
└── routers
    ├── alexa.js
    ├── api.js
    ├── colab.js
    └── home.js
npx babel --watch . --out-dir js/ --presets react-app/prod
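One possible direction (an untested sketch; it assumes @babel/cli v7, whose --relative option resolves --out-dir against each input's own location) is to list the react directories explicitly and compile each one level up:
npx babel public/colab/home/js/react public/homepage/js/react \
    --watch --out-dir .. --relative --presets react-app/prod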

Hadoop Restore from namenode and datanode files

I have the datanode, namenode, and secondary namenode folders (with all data and information inside) from a different Hadoop installation.
My question is: how can you see what's in there, or add it to your local HDFS to see the data?
There should be a way to restore it, but I can't find any information about it.
The folder tree is like this:
For Namenode & SecondaryNamenode:
data/dfs/name
├── current
│   ├── VERSION
│   ├── edits_0000000000000000001-0000000000000000007
│   ├── edits_0000000000000000008-0000000000000000015
│   ├── edits_0000000000000000016-0000000000000000022
│   ├── edits_0000000000000000023-0000000000000000029
│   ├── edits_0000000000000000030-0000000000000000030
│   ├── edits_0000000000000000031-0000000000000000031
│   ├── edits_inprogress_0000000000000000032
│   ├── fsimage_0000000000000000030
│   ├── fsimage_0000000000000000030.md5
│   ├── fsimage_0000000000000000031
│   ├── fsimage_0000000000000000031.md5
│   └── seen_txid
And for Datanode:
data/dfs/data/
├── current
│   ├── BP-1079595417-192.168.2.45-1412613236271
│   │   ├── current
│   │   │   ├── VERSION
│   │   │   ├── finalized
│   │   │   │   └── subdir0
│   │   │   │       └── subdir1
│   │   │   │           ├── blk_1073741825
│   │   │   │           └── blk_1073741825_1001.meta
│   │   │   ├── lazyPersist
│   │   │   └── rbw
│   │   ├── dncp_block_verification.log.curr
│   │   ├── dncp_block_verification.log.prev
│   │   └── tmp
│   └── VERSION
Thanks in advance.
The standard solution for copying data between different Hadoop clusters is to run the DistCp command to execute a distributed copy of the desired files from source to destination.
Assuming that the other cluster is no longer running, and you only have these backup files, then it's possible to restore by copying the files that you have into the directories used by the new Hadoop cluster. These locations will be specified in configuration properties in hdfs-site.xml: dfs.namenode.name.dir for the NameNode (your data/dfs/name directory) and dfs.datanode.data.dir for the DataNode (your data/dfs/data directory).
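For reference, a minimal hdfs-site.xml sketch (the paths are illustrative):
<configuration>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/path/to/data/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/path/to/data/dfs/data</value>
    </property>
</configuration>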
Please note that this likely will only work if you run the same version of Hadoop from the prior deployment. Otherwise, there could be a compatibility problem. If you attempt to run an older version, then the NameNode will fail to start. If you attempt to run a newer version, then you may need to go through an upgrade process first by running hdfs namenode -upgrade.
One other option if you just need to look at the file system metadata is to use the Offline Image Viewer and Offline Edits Viewer commands. These commands can decode and browse the fsimage and edits files respectively.
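For example (input file names taken from the listing above; output paths are illustrative):
hdfs oiv -i data/dfs/name/current/fsimage_0000000000000000031 -o fsimage.xml -p XML
hdfs oev -i data/dfs/name/current/edits_0000000000000000031-0000000000000000031 -o edits.xml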

How to publish the angularjs project from Webstorm

There is a lot out there on how great WebStorm is for editing Angular, and its built-in template is quite good; however, I can't find anything on what to do when I'm happy with the app.
Say I create the default template: how can I get a nice folder structure for this app so that I can FTP it to a remote server?
Better yet, is it possible to 'compile' (package) all my Angular dependencies and modules into one .js file and then, for example, have an index.html that just references it somehow?
You can take inspiration from this excellent Yeoman generator.
I'm currently using it, and when I'm done coding and testing with gulp serve, I just release my app by compiling all my source code with the gulp build command.
The architecture is as follows, and I personally find it a really good one:
├── src/
│   ├── app/
│   │   ├── components/
│   │   │   └── navbar/
│   │   │       ├── navbar.controller.js
│   │   │       └── navbar.html
│   │   ├── main/
│   │   │   ├── main.controller.js
│   │   │   ├── main.controller.spec.js
│   │   │   └── main.html
│   │   ├── index.js
│   │   ├── index.(css|less|scss)
│   │   └── vendor.(css|less|scss)
│   ├── assets/
│   │   └── images/
│   ├── 404.html
│   ├── favico.ico
│   └── index.html
├── gulp/
├── e2e/
├── bower_components/
├── node_modules/
├── .bowerrc
├── .editorconfig
├── .gitignore
├── .jshintrc
├── bower.json
├── gulpfile.js
├── karma.conf.js
├── package.json
├── protractor.conf.js
Once you've run gulp build, all your JS files are compiled into one unique index.js file, and so are your style files. All vendor scripts included by Bower are likewise compiled into a single vendor.js file stored in your dist directory.
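The idea behind that build step, reduced to a minimal sketch (plain gulp-concat rather than the generator's actual pipeline; paths are illustrative):
// gulpfile.js
var gulp = require('gulp');
var concat = require('gulp-concat');

gulp.task('scripts', function () {
    return gulp.src('src/app/**/*.js')  // gather all application scripts
        .pipe(concat('index.js'))       // merge them into a single file
        .pipe(gulp.dest('dist'));       // write dist/index.js
});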
