CLI vs Pure C/C++ Library for a program? [closed] - c

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.
Closed 6 years ago.
Background / Context: I am developing a Linux NAS server (like FreeNAS or Rockstor) in Go; the particular feature will be a JSON REST API so that you can interact with LVM2, shares, packages, etc.
Question: With respect to security, performance, and development time, what are the advantages, disadvantages, and best practices of spawning external processes versus using a native library for certain features of a program?
Example: For my particular use case, the NAS management system will use LVM2 to manage volumes. I can either use the CLI to manipulate volumes, or attempt to use the native LVM2 C API via Go's cgo package.
EDIT: Rephrased my question / information.

There are two things that may make using exec in its different variants a no-go: security and speed.
Security: If you shell out with system() or friends, you must be absolutely certain that the command string contains nothing that can do funny stuff with your command line. It's the same basic problem as SQL injection, just at a much lower and even more disastrous layer (obligatory XKCD, just replace "'); DROP TABLE Students;--" with valid sh code along the lines of '"; echo "pwnd'; well, you get the idea).
Speed: When you shell out to an existing program, you create a new process, and that may be a performance hit you cannot tolerate. It's perfectly OK if the task you shell out for takes more than a few milliseconds (process creation is somewhere in the range of a millisecond on Linux), but if you need more than a thousand calls per second, you definitely want to avoid this overhead.
If these two points are taken care of or proven to be non-issues, then it's perfectly ok to shell out to other processes.

Related

C# winform application security Vulnerability Testing Tools [closed]

Closed 5 years ago.
Does anyone have any recommendations for good vulnerability testing software for C# Windows Forms (desktop, not ASP.NET) applications?
Preferably one that can also test against a MySQL or SQL Server connection.
There is no tool that is going to match a good code reviewer or penetration tester. But a few tips to get you aimed in the right direction:
Static analysis tools like HP Fortify, IBM AppScan, and CheckMarx do a wonderful job of finding security issues in code, but you really need an experienced code reviewer to get the most out of them. Also, they are not cheap! These tools operate by scanning code, and the main requirement is to give the tool everything needed to build your software (at least this is the case for Fortify and AppScan; I'm not sure whether the same requirement holds for CheckMarx).
IAST tools such as Contrast are also not cheap. However, at least in the case of Contrast, they are specifically trying to make it more developer-friendly. IAST tools work by hooking into your binary in your test environment and looking under the hood for bad things that happen.
Dynamic analysis tools such as OWASP ZAP (free) and Burp (not free, but affordable) can run automated scans in your environment, but if you lack experience with them, the value you get is limited. These tools work by scanning a test environment and sending malicious payloads to see how the server responds. A lot of effort is going into making ZAP work in continuous-integration build environments.
All of these should work for the technologies that you are using.

system libraries for tracing? [closed]

Closed 7 years ago.
Here's the problem:
I have a file on a server, which needs to have a certain set of permissions, say 644. From time to time some process changes the permissions to 600, and I don't know which. How do I catch it in the act?
For this sort of problem and other similar ones, I am looking for a set of system libraries (I think) that I can insert in front of the normal ones, which do pretty much the same as the normal libraries except that they somehow log the calls made, along with a timestamp and the name of the perpetrator. Are there any tools, libraries, whatever, that provide this?
First of all, you need to identify which program is making the change. To do that you could use SELinux; some Linux distributions (if not all) keep its settings in /etc/selinux, where you can define rules for what's allowed. Violations of the rules will be denied, but failed attempts will also be logged (so this both gets rid of the symptoms and points to the cause).
For more information about SELinux I'd suggest you ask on Unix & Linux Stack Exchange.
The next step, if it's your own program, would probably be to run it under gdb and put a conditional breakpoint on the chmod function. gdb can also just do a printout at the breakpoint and continue, which lets the program run almost normally while you get a printout for every file that is chmoded.
This is the kind of thing auditing was designed to do.
See How can I audit all chmod and chgrp commands? for an example that probably qualifies as a duplicate of your question.

How to create distributed file system [closed]

Closed 7 years ago.
Just for self-education I decided to implement a "hello world" distributed file system. The simplest one. And I decided to read up on the theory behind the subject.
But when I ask Google about this, it shows answers like "how to configure HDFS" or "how to set up a distributed FS on Windows", which is not what I am interested in.
Could someone please point me to some good articles or books on this subject?
Thanks a lot!
Well, if you really have decided to implement such a file system, you must start with distributed systems in general. I recommend reading the Tanenbaum reference book: http://www.distributed-systems.net/index.php?id=distributed-systems-principles-and-paradigms
Careful: the subject is really complex, and distributed systems are anything but simple to implement.
If you want to look at some already implemented distributed file systems, have a look at GFS/GFS2 (from Red Hat). You may also have a look at OCFS2 from Oracle.
You may also have a look at Gluster: https://fr.wikipedia.org/wiki/GlusterFS
You may also be able to find some white papers on the Google File System (from when it was still a university work).
The main problem of such a distributed system is failure detection (detecting when a node crashes while writing to the file system, to make sure there are no corruptions). There are multiple strategies; one is to implement a journal protected by a distributed lock.
Another great (classical) problem is the 'split brain' problem, when the cluster is split into two groups because of a network failure (imagine a broken switch). Both groups 'think' that the other one is dead (they cannot communicate with it), but there is no way to make sure the distant group is not writing data, which causes the data to diverge.
Hope you find what you want with all this.
Edit:
Now GFS is deprecated; Red Hat is using and developing Ceph.

Generator of "mind map" from files.c [closed]

Closed 8 years ago.
I started learning the C language a while ago, and I have spent several hours searching for THE miracle piece of software.
I am looking for software that imports the sources of a C program (.c files) and generates a "mind map" of the code with all files, functions, variables, etc.
Do you know if it exists? It would help me a lot to understand the architecture of complex software.
Thank you very much for all your answers.
Take a look at the "call graph". This sort of visualization should get you started.
As the comment suggests, Doxygen is a good open-source tool. Take a look at some output here. Doxygen is straightforward to configure for call-graph generation under *nix; it's a little more complex on Windows. First, check out this SO post: how to get doxygen to produce call & caller graphs for c functions. Doxygen's HTML output provides a number of nice cross-referencing features (files, variables, structs, etc.) in addition to caller/callee graphs.
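For reference, a sketch of the Doxyfile settings typically involved in call-graph generation (the exact set depends on your Doxygen version, and HAVE_DOT requires Graphviz to be installed):

```
# Doxyfile excerpt: enable caller/callee graphs
EXTRACT_ALL   = YES   # document everything, even undocumented functions
HAVE_DOT      = YES   # use Graphviz "dot" for graph rendering
CALL_GRAPH    = YES   # graph of functions each function calls
CALLER_GRAPH  = YES   # graph of functions that call each function
```

With these set, run doxygen against a Doxyfile whose INPUT points at your source tree and browse the generated HTML.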
On the commercial side, Understand for C/C++ has first-rate visualization features. Google "c call graph diagram" for other commercial and open-source options.
Finally, there are some older SO posts, like this one Tools to get a pictorial function call graph of code. Take a look at it.
Look into the program ctags. It is an indexer of names and functions based on the structure of the programming language.
It is quite mature, and has integration with a number of other tools. I use it with an older (but very nice) text editor called vi, but it can be used independently from the command line.
It does not generate a graphical view of the connections. However, in my estimation there are probably too many connections in most C programs to display visually without creating a large amount of information overload.
This answer differs from Throwback's answer in some interesting ways. A call graph can mean a few things. One thing it can mean is the path a running program took through a section of code, and another is the combination of all paths a running program might take through the code, and another is the combination of all paths in the code (whether they can be reached or not).
Your needs will drive which tool you should use.

Looking for a cross platform small footprint database [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.
Closed 8 years ago.
Improve this question
I have the following scenario: I need a db to store XML messages that have been created by a reader. I then want to use a transport (WCF) to read the db, external to the populating app, and send the messages to a central db. Generally the db needs to run on Mono and Windows.
I did look at SQLite3, and it seemed to fit all my requirements, but I'm reading it's not so good with multi-process access, and it's been moving away from my sweet spot these last couple of days.
Thanks.
Have you considered just using XML to store the data? It doesn't get any more portable than that, and it will work fine as long as your client-side storage needs are simple, e.g. not a large number of domain objects that need to be stored.
Additionally using an XML data store solves a lot of setup and installation headaches. You simply reference a file (or files) relative to your executable. You don't need to worry about installing db engines for a variety of platforms and then worry about upgrading.
Would it be feasible to give each process its own SQLite3 database? They all ultimately use the central database anyway, right?
Have a look at Firebird.
You can use it as an embedded engine just like SQLite, but it can scale to a full blown server as well.
The only drawback is that the documentation is a mess.
