Changing the name of a program before building it on Ubuntu - C

Recently I have been working with an open source simulator called Multi2Sim (M2S). I'm using the simulator to simulate heterogeneous processors and collect data for my senior project, which aims to test the efficiency of different replacement policies with heterogeneous processors. The program is downloaded from the official site https://www.multi2sim.org.
After following the instructions, I successfully installed and ran the program on Ubuntu 14.04 from the terminal by calling the "m2s" command. I used it to run the processors with the LRU, FIFO, and Random cache replacement policies, because those are the only policies M2S provides. The nature of my senior project demands that I use as many replacement policies as I can, so I contacted a group of researchers who had worked with M2S and been able to implement their own policies in the program. After sending a nice email inquiring about the process of adding a policy to M2S, they politely said that they couldn't tell me, since their research is still ongoing.
After snooping around the M2S files that I downloaded before running the make command, I found where the replacement policies are written in C, in a file called "cache.c". I now understand the overall mechanism of how the C program works.
I don't have much knowledge of how C programs are compiled and linked, though.
My question is: if I write the replacement policies into the "cache.c" file, do I need to run the make command again in order to use them with the m2s command, or can I somehow use the new policies without rebuilding the whole program? And if I do have to rebuild the program, is there a way to build it so that the resulting terminal command has a different name?
Thank you all in advance.

Yes. The whole point of make is that it rebuilds the parts of the program which need rebuilding (in fact, it is designed to rebuild only those parts of the program that require rebuilding).
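To make that concrete, here is a minimal sketch of what a custom victim-selection routine could look like. Every identifier below is hypothetical; the real structures and function signatures are whatever Multi2Sim defines in its own cache.c/cache.h, and a new policy would have to be wired into those.

    /* Hypothetical sketch of a least-frequently-used (LFU) policy.
     * All names here are illustrative, not Multi2Sim's actual API. */
    struct block_t
    {
        int valid;               /* does this block currently hold data? */
        unsigned access_count;   /* incremented on every hit */
    };

    struct set_t
    {
        struct block_t *blocks;  /* one entry per way */
        int assoc;               /* associativity of the cache */
    };

    /* Return the way index of the block to evict from this set. */
    static int cache_replace_lfu(struct set_t *set)
    {
        int victim = 0;
        for (int way = 0; way < set->assoc; way++)
        {
            if (!set->blocks[way].valid)
                return way;      /* a free way: no eviction needed */
            if (set->blocks[way].access_count
                    < set->blocks[victim].access_count)
                victim = way;
        }
        return victim;
    }

After editing cache.c, running make again recompiles only the changed objects and relinks m2s. If you want the rebuilt simulator available under a different command name, the simplest route is to copy the freshly built binary to a new name somewhere on your PATH (for example, cp m2s m2s-custom) rather than renaming anything inside the build system.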
You might also want to consider putting the program under source control (git is worth learning) so that when you break it (as you inevitably will whilst learning) you can easily revert your mistakes, and see exactly what you changed.

Related

Prevent program from being run on other machines

I have a Linux executable running on my Ubuntu machine. I want to grant a user access to execute the program, but I don't want the program to be copied.
I was thinking of making a simple crypter app that would decrypt the program at run time and run it from memory.
Is this feasible?
You can
chmod -r program
The executable will still be runnable, but you cannot copy it.
I just tested that on Ubuntu 14.04 with a downloaded eclipse binary - it worked.
Please note that this will only work for binaries. It will not work for script files that need to be read and interpreted by a shell or interpreter.
It depends heavily on the kind of attack a potential user is able to mount, which typically relates to the commercial value of a successful attack.
First of all:
If a user has physical access to the storage, there is no way to protect anything from being copied. Simply booting another OS makes all of the system's internal protections baseless. This is true for the protected program and also for any program that implements some kind of obscured security feature such as decryption. A PC can be booted from USB or any other media, so forget about anything like rights management enforced by the OS.
Faking the MAC address of a PC is something that can be done in a few seconds: load a kernel driver which registers a pseudo network card and you will get any fake MAC you want. And who will prevent the PC from running a modified kernel?
Next, any kind of decryption results in a memory image which holds the executable during the program's runtime. Any low-privileged attacker can grab a copy of this memory and create an application that runs the image on any other machine.
As you can see from real-world licensing models, the only chance is to use additional hardware which is fully secured, like crypto USB sticks or other kinds of ciphering agents. Another trick can be some kind of online key repository. But none of this can be done by simply implementing some crypto algorithms.
If you have a product which must be protected against illegal usage, you have to use a commercial protection scheme.
Sorry, I can't tell your intention from the question: whether you just want to keep a simple application with no commercial value on one PC away from a "friend", or whether you have to secure the income of your business :-)
If I'm understanding correctly, you have a logged-in user who needs to run program X but must not copy program X?
One way, if this is a compiled executable, is to make it execute-only, but double-check your suid_dumpable kernel setting.
If it's a script, or if there are configuration files that go along with it and need protection, then the /etc/shadow pattern applies: users need the file's contents to be used on their behalf, but must not be able to copy it elsewhere to attack it. For this pattern, the solution is a mediator program. Specifically, the mediator can temporarily raise its privilege to read the file, but cannot be coerced into providing access to anything in the file beyond exactly what is needed to run.
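As an illustration of the mediator idea, here is a minimal sketch in C. The install path and ownership scheme are assumptions made up for the example, not a drop-in solution; a real mediator must validate everything it does very carefully, because it runs with elevated privilege.

    /* Sketch of a setuid "mediator": it can execute a program that the
     * invoking user cannot read or copy.  Installed, for example, as:
     *     chown appowner /usr/local/bin/mediator
     *     chmod 4711 /usr/local/bin/mediator
     * with the (hypothetical) target below owned by appowner, mode 0700. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(int argc, char *argv[])
    {
        /* A fixed target, never derived from user input. */
        const char *target = "/opt/app/real-program";

        (void) argc;             /* arguments are passed through as-is */
        argv[0] = (char *) target;
        execv(target, argv);

        perror("execv");         /* execv returns only on failure */
        return EXIT_FAILURE;
    }

The same idea extends to protected data files: the mediator reads them itself and hands the program only what it needs.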
The accepted answer to this question explains the variety of options nicely. I personally like the sudo approach.

How does a desktop environment developer test his code?

I can't figure out how a desktop environment developer tests his code. Usually, a C or C++ programmer compiles his code and then runs it (I'm not one of those programmers; I'm a web developer).
You usually build your GUI application on top of some kind of desktop environment (Windows, Mac OS X, GNOME, KDE, Xfce...), so how do desktop environment developers build and test their GUIs?
And if this is a silly question: how does a kernel programmer test his code, for example the Linux kernel? How do you know that what you just wrote works?
Testing is a very broad term; there are many types (a partial list):
unit tests - test small pieces of code, checking that the code behaves as expected (see the sketch after this list).
system tests - test the whole application in real-world scenarios.
performance tests - measure the performance of the application or a part of it.
GUI testing - test the operation of GUI elements (less common as automated tests).
static analysis - compiler warnings on steroids.
dynamic analysis - at a minimum, memory checks: verifying memory allocations and usage.
coverage tests - check that all code is executed.
formal verification tests (very advanced) - e.g. check when assertions/assumptions are broken.
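As a sketch of the first item in the list, a unit test in plain C can be as small as a program full of assertions; real projects typically use a framework such as Check or CUnit, but the idea is the same. The function under test here is made up for the example:

    /* Minimal unit-test sketch: each assert checks one expected
     * behaviour of the unit under test. */
    #include <assert.h>
    #include <stddef.h>

    /* The unit under test: a tiny function with a clear contract. */
    static size_t count_char(const char *s, char c)
    {
        size_t n = 0;
        for (; *s; s++)
            if (*s == c)
                n++;
        return n;
    }

    int main(void)
    {
        assert(count_char("hello", 'l') == 2);
        assert(count_char("hello", 'z') == 0);
        assert(count_char("", 'a') == 0);
        return 0;    /* exit status 0 means every test passed */
    }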
Kernel code can be debugged by connecting from a second computer (the host). Virtual machines use the same principle and simplify the setup, but they can't always be used, since the relevant hardware might not exist in the guest VM.
The kernel (in all OSes) has trace mechanisms for printing progress and problems. In Linux the simple trace is shown via the dmesg command (which prints a cyclic buffer).
User mode code can easily be stopped and debugged via a debugger.
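For the trace mechanism mentioned above, the classic smoke test is a trivial kernel module that logs through printk; after loading it with insmod, its messages appear in the dmesg buffer. A minimal sketch:

    /* hello.c - minimal kernel module whose only job is to emit
     * trace output that becomes visible via dmesg. */
    #include <linux/init.h>
    #include <linux/kernel.h>
    #include <linux/module.h>

    MODULE_LICENSE("GPL");

    static int __init hello_init(void)
    {
        printk(KERN_INFO "hello: module loaded\n");
        return 0;
    }

    static void __exit hello_exit(void)
    {
        printk(KERN_INFO "hello: module unloaded\n");
    }

    module_init(hello_init);
    module_exit(hello_exit);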
Desktop Environments
Testing desktop environments in real-world scenarios can be rather annoying, so the developer has to watch out for every small error he makes; if he doesn't, he will have a hard time developing the DE.
As stated by @egur, there are multiple ways of testing his code. The easiest and most important one (though it cannot be used in some cases, of course) is to test that code in a simplified program.
A desktop environment consists of many parts; in your case, however, I suppose you're talking about the session manager (or window manager), which is responsible for almost everything. To test that, the developer would simply exit his current DE and run the new executable. In case of an error, he can always keep a backup of the old executable, or fix the faulty code using a command-line text editor (like vim or nano).
Kernel
It's quite hard to test. Some kernel developers just write code and make sure it's fine and compiles, then simply let their users test it (by ACK'ing the code, etc.); then it can be submitted into the kernel tree. The reasoning behind that is that the developer may not have the hardware needed to test the code.
Right now you can also compile and run the kernel in user mode (UML, User-Mode Linux), if you have heard of it, so some developers may go for that. However, some developers may also want to test the code themselves (backing up the current kernel, of course, in case of a screw-up).
The way to test a desktop application is related to the way of controlling the application unattended or remotely.
The Cross Platform GUI Test Automation tool project (I don't know if this project has a website) helps you choose the interfaces/libraries required to solve the problem.
On Linux it uses the accessibility libraries to control the application[1]; there are Cobra[2] for Windows and PyATOM[3] for Mac OS, but I don't know what kind of technology they use on those platforms.
[1] http://ldtp.freedesktop.org/wiki/
[2] https://github.com/ldtp/cobra
[3] https://github.com/pyatom/pyatom

Build a makefile dependency / inheritance tree

Apologies if I explain this badly, or am asking something bleeding obvious, but I'm new to the Linux kernel and kinda in at the deep end...
We have an embedded Linux system which arrives with a (very badly documented) SDK containing hundreds of folders of stuff, most folders containing a rules.make, make, make.config or some variation thereof... and the root folder containing a "master" makefile & rules.make, which means that you can, from the root folder, type "make sysall" and it builds the entire package.
So far so good, but trying to debug it is a bit of an issue as the documentation will say something like:
"To get the kernel to output debug messages, just define #outputdebugmessagesplz"
OK, but some of these things are defined in the "master" make/rules files, some are defined in the child make/rules/config files, and some are in .h files... and of course it's far nicer to turn these things on and off from the top-level make.config rather than modifying individual .h files and then having to remember to turn them off again.
So I thought it would be useful to recursively build a tree, starting from the master makefile and following everything it does, everything that gets defined or redefined, and so on... but there doesn't seem to be a simple way of doing that?
I assume I am missing a "make" option here that spits this info out, or a usage of the makefile/config that will just work?
Your situation is not uncommon. When developing for embedded systems, you may encounter many custom systems that solve a problem in their own specific way. As people have already commented on your question, there's no easy way to generate a dependency graph for your makefile structure/framework. But there are some things you can try, and I'll base my suggestions on your situation. Since you've said:
I'm new to the Linux kernel and kinda in at the deep end...
and
We have an embedded-linux system which arrives with a (very badly
documented) SDK containing hundreds of folders of stuff
You could try the following things:
If your SDK is provided by a third-party vendor, try contacting them and get some support.
SDKs usually provide an abstraction for working with several components without a deep understanding of how each one of them really works. Try to pinpoint your problem: if, say, you want to customize only the kernel configuration, you could find the Linux kernel folder in your SDK (assuming your SDK is composed of a set of folders holding libraries, application source code and so on, one of them being the kernel) and run make menuconfig there. This will open an ncurses-based configuration GUI that you can navigate to choose kernel options.
As people have already pointed out, you can try running make -n and checking the output. You could also try running make -p | less and inspecting the output, but I don't recommend this, since it will only print the database (rules and variable values) that results from reading the makefiles; you would have to parse this output to find what you want in it.
Basically, you should try to pinpoint what you want to customize and see how it interacts with your SDK. If it's the kernel, then working only with the kernel will give you a starting point. The Linux kernel has its own makefile-based build system, named kbuild. You can find more information about it in the kernel's Documentation folder.
Besides that, trying to understand how makefiles work will help you if you have a complex makefile structure controlling several components. The following are good resources to learn about makefiles:
GNU Make official documentation
O'Reilly's Open Book "Managing Projects with GNU Make"
Also, before trying to build your own tool, you can check whether there's an open source project that does what you want. A quick search on Google gave me this:
makegrapher
Also, check this question and this one. You might find useful information from people who have had the same problems you did.
Hope it helps!

Virtual Instance of a C compiler on client browser

Is there a way I can create a virtual instance of the gcc compiler in the client's browser when the client opens my website?
By doing so, I could directly pass the user's .c file as an argument to my compiler instance and then execute it, without having to make a POST call to the server and execute the file there.
Originally I understood your question to be targeting the native platform on which the browser is running:
Consider that browsers may be running on many different platforms, operating systems and processor architectures. Compiling C in the way you describe might be technically doable, but practically infeasible.
I was basing "practically infeasible" on the difficulty of supporting the plethora of widely used browser platforms.
Now I understand that you are thinking more along the lines of targeting a virtual environment. I'll amend "practically infeasible" to "a large amount of work".
If I understand your intent, it is to run a C compiler which emits, shall we say, compiled x86 code, and then to execute it. To do that we need an emulation of the x86 environment in, say, JavaScript. What's more, I think your intent is that the compiler itself execute in this environment, so that you can re-use gcc. So you'll need to emulate a file system too. It's "obvious" that this could be done, but it really is a lot of work. Is it really worth it?
Competition code is small (I guess), and even with lots of programmers the number of simultaneous compiles can't be that huge. With a decent queued-request system, a touch of Ajax, and a bit of back-end scaling, how costly is it to support the expected population? What's the ratio of developers to back-end systems?
Anyway, if I were to address this problem, I'd take the code of an open-source browser and meld in the gcc code to produce a compiler/browser hybrid. Give that to the developers and tell them: "Use this and get zippy compilation speeds, or use your own browser and join the queue."
You're not going to be able to use GCC as written for this. At best, you could accomplish something similar if you had a compiler written in Java that targeted the JVM and could be run as an applet. I don't know what it would take to get something like this working, but I suspect it would take a fair bit of work. As far as I know, nothing currently exists that does this.
Perhaps use jsLinux in the background? There, the make process can run inside the virtual machine. Communication could be done by extending the clipboard transfer, perhaps into multiple pipes...
I would be interested in JavaScript-based gcc solutions, too.

Build a C project automatically

I'm working on a free software (BSD license) project with others. We're searching for a system that checks out our source code (SVN) and builds it, as well as tests it (unit tests with Check / other tools).
It should have a web-based interface and generate reports.
I hope we don't have to write such a system from scratch ourselves...
You surely do not have to code this yourself; there are a lot of continuous integration systems which can check out source code from systems such as SVN, and they are generally easy to extend with your own tasks, so running custom test scripts/programs should not be a problem.
While these CI systems are probably not written in C, this does not matter, since they just need to be able to access and compile your source code, for which they will use an external compiler anyway.
Just to list some of the well known CI tools:
CruiseControl
Hudson
TeamCity
You might also be interested in other questions on Stack Overflow tagged as continuous-integration. :)
I don't think there's a single build system capable of doing all of these tasks, but what about combining them?
SCons is a nice build system that runs on every machine that has Python. It can even build directly from SVN. For automatic builds you can try Buildbot.
Check out buildbot
My vote would be for CruiseControl.NET; it has everything you are asking for. It is open source so the costs are low, and it has a very active user community on Google Groups to help you with your problems as you grow accustomed to it. Also, although it is .NET based, it runs very nicely on Linux and Mac build servers as well using Mono, so you have everything covered.
