How to protect desktop application from being installed on different computer when using SerialNumberTemplate property for accepting keys [closed] - licensing

How do I protect a Windows desktop application from being installed on a different computer when using the SerialNumberTemplate property to define the key pattern?
How can I bind a particular key to only one user, so that it can be used only once, i.e. for a single setup?

I think you need crack-proof/anti-cracking logic. I should say you can't realistically build that entirely on your own, but there are simpler options:
You can look for code/libraries that generate serial numbers based on hardware serial numbers.
For example, check the following tools:
.NET Reactor and IntelliLock
As you know, hardware serial numbers are unique to each machine; so with this method, the code generates a unique, machine-specific serial number from the hardware serial numbers in each system.
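As a rough illustration of the idea (not any particular product's method), here is a minimal Win32 C sketch that reads the C: volume serial number and mixes it into a machine-bound key. The choice of the volume serial as the hardware identifier and the mixing constants are assumptions for illustration only; a real scheme would combine several identifiers with a proper cryptographic hash.

#include <windows.h>
#include <stdio.h>

int main(void)
{
    DWORD volumeSerial = 0;

    /* Read the serial number of the system volume as one hardware identifier. */
    if (!GetVolumeInformationA("C:\\", NULL, 0, &volumeSerial,
                               NULL, NULL, NULL, 0)) {
        fprintf(stderr, "Could not read volume information\n");
        return 1;
    }

    /* Derive a machine-bound key from the identifier; the constants below
       are arbitrary and purely illustrative, not a real algorithm. */
    unsigned long key = ((unsigned long)volumeSerial ^ 0xA5A5A5A5UL) * 2654435761UL;
    printf("Machine-bound key: %08lX-%08lX\n",
           (unsigned long)volumeSerial, key & 0xFFFFFFFFUL);
    return 0;
}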
But I must point out that building your own crack-proof/anti-cracking logic is very complicated, and that using code/libraries shared publicly on the Internet is at your own risk: because they are public, their anti-cracking logic may already have been analysed and broken, and the cracks shared on the Internet.
If you want to create your own logic, you need to know about cracking and anti-cracking methods. Today, developers use hybrid approaches that combine several methods. In modern programs like MS Office or the MS Windows operating systems, developers use many techniques to prevent the software from being cracked, such as:
Checking over the Internet and shipping code in updates that detects cracked parts of the program
Checking the hardware serial numbers and generating the unique serial number with a very secure and complex method
Creating many junk threads to hide the data flow of the licence information and make it hard to trace with data-flow checkers, disassemblers, or debuggers such as SoftICE
Running licensing services that watch the running processes to detect cracking software and verify licensing and other related state
And many other methods
A strong cracker knows assembly language, how to use disassemblers and debuggers, how to trace the data flow, and how the application and the targeted OS connect to the Internet.

Related

Is it possible to use Erlang and C language hybrid programming with C at high proportion? [closed]

My new job will use the Elixir language.
I'm new to it, and to the Erlang environment in general. From some research, I've found that the platform has performance problems with CPU-intensive computing.
Is it possible to use C to replace Erlang in that situation with NIFs, even if the C code ends up being a high proportion of the program? Or are there limits that prevent this?
The easiest and safest way to run computationally intensive C code from Erlang is to write the C code as a standalone executable and connect it to Erlang through a port. See http://erlang.org/doc/tutorial/c_port.html for details.
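For a sense of what the C side looks like, here is a minimal sketch of a port program, assuming the Erlang side opens it with something like open_port({spawn, "./myprog"}, [{packet, 2}]), so that every message is prefixed with a 2-byte big-endian length ("myprog" and the echo step are placeholders; the CPU-intensive work would go where the echo is).

#include <unistd.h>

/* Read or write exactly len bytes on stdin/stdout. */
static int read_exact(unsigned char *buf, int len)
{
    int n, got = 0;
    while (got < len) {
        if ((n = read(0, buf + got, len - got)) <= 0)
            return n;
        got += n;
    }
    return len;
}

static int write_exact(const unsigned char *buf, int len)
{
    int n, done = 0;
    while (done < len) {
        if ((n = write(1, buf + done, len - done)) <= 0)
            return n;
        done += n;
    }
    return len;
}

int main(void)
{
    unsigned char len_buf[2], buf[65536];

    /* Loop until the Erlang side closes the port. */
    while (read_exact(len_buf, 2) == 2) {
        int len = (len_buf[0] << 8) | len_buf[1];
        if (read_exact(buf, len) != len)
            break;

        /* CPU-intensive work would go here; as a placeholder, echo the request. */
        write_exact(len_buf, 2);
        write_exact(buf, len);
    }
    return 0;
}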
Note the warning about long-running NIFs in the documentation:
As mentioned in the warning text at the beginning of this manual page, it is of vital importance that a native function returns relatively fast. It is difficult to give an exact maximum amount of time that a native function is allowed to work, but usually a well-behaving native function is to return to its caller within 1 millisecond. This can be achieved using different approaches. If you have full control over the code to execute in the native function, the best approach is to divide the work into multiple chunks of work and call the native function multiple times. This is, however, not always possible, for example when calling third-party libraries.
The enif_consume_timeslice() function can be used to inform the runtime system about the length of the NIF call. It is typically always to be used unless the NIF executes very fast.
The documentation goes on to suggest three ways around this, "yielding NIF", "threaded NIF" and "dirty NIF".
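As a rough sketch of the "yielding NIF" approach (the module name, function name, and chunk size are made up for illustration): the NIF does a bounded chunk of work, reports the time used with enif_consume_timeslice(), and reschedules itself with enif_schedule_nif() if the timeslice is exhausted.

#include <erl_nif.h>

#define CHUNK 4096   /* iterations per chunk; tuning is workload-dependent */

/* sum(N, I, Acc): toy CPU-bound loop summing the integers below N. */
static ERL_NIF_TERM sum_nif(ErlNifEnv *env, int argc, const ERL_NIF_TERM argv[])
{
    unsigned long n, i, acc;

    if (argc != 3 ||
        !enif_get_ulong(env, argv[0], &n) ||
        !enif_get_ulong(env, argv[1], &i) ||
        !enif_get_ulong(env, argv[2], &acc))
        return enif_make_badarg(env);

    while (i < n) {
        unsigned long end = (i + CHUNK < n) ? i + CHUNK : n;
        for (; i < end; i++)
            acc += i;

        /* Tell the scheduler roughly how much of a 1 ms slice was used;
           if it is exhausted, reschedule with the intermediate state. */
        if (enif_consume_timeslice(env, 5) && i < n) {
            ERL_NIF_TERM newargv[3] = {
                argv[0],
                enif_make_ulong(env, i),
                enif_make_ulong(env, acc)
            };
            return enif_schedule_nif(env, "sum", 0, sum_nif, 3, newargv);
        }
    }
    return enif_make_ulong(env, acc);
}

static ErlNifFunc nif_funcs[] = {
    {"sum", 3, sum_nif}
};

ERL_NIF_INIT(my_module, nif_funcs, NULL, NULL, NULL, NULL)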
There is also a third way: you can run your C code as a standalone Erlang node (a "C node") that communicates via the Erlang inter-node protocol.
See: http://erlang.org/doc/tutorial/cnode.html

Maintaining a single codebase between embedded and non-embedded code [closed]

I'm working on a robotics research project that involves programming a microcontroller. I'd like to be able to decouple testing the software from testing the hardware to the greatest extent possible. This is both to increase development speed and also so that I can more easily unit test/simulate the code before putting it on the robot. So for example, I might write a "MyRobot" library. Then I could include this library both in the embedded code and my non-embedded simulation/testing code. At runtime, I'd provide function pointers that would either read (in the embedded case) or simulate (in the simulation) sensor data and feed that into the library.
So it seems like all I would need to do is generate two libraries at compile time: one for the embedded code, and one for the non-embedded code.
My question is whether this is feasible, whether there are better ways to do it, and whether there are any gotchas I should watch out for.
Thanks in advance!
This is a common situation in embedded systems development, and your approach of creating two libraries is usually the recommended solution. It's considered a best practice to decouple the low-level hardware from the software in embedded systems firmware.
The library you mentioned is commonly known as a "Hardware Abstraction Layer", or HAL. The API (application programming interface) for the HAL can be provided in a single header file named something like hal.h. Every source module in your software that needs to access the hardware would have the following line at the top of the source file:
#include "hal.h"
Benefits of designing your system like this include:
Modularity. If you need to make a timing change on, say, a UART or SPI interface that reads a sensor, you only need to change the HAL library, even though there may be multiple locations in your code that read that sensor.
Portability. If you later need to migrate your project to a different microcontroller, only the HAL layer would need to change.
Encapsulation. The details of the hardware are hidden within the HAL layer, which allows your other software to operate at a higher level of abstraction. If you are using a device library provided by the microcontroller manufacturer that provides the addresses of registers, I/O ports, etc., you can encapsulate the references to this library within your HAL library, so that your application code need have no knowledge of it.
Testability. This was the primary focus of your question. You can write a special version of the HAL layer that can be run on a different platform (such as Windows, for example) for testing of your application software. This special version would not need to include the device library provided by the microcontroller manufacturer, because when you are running in the test environment, the microcontroller doesn't exist, so its registers and I/O ports don't need to be accessed by your software.
For your two scenarios, as you suggested, you would create two versions of the HAL library: the standard version that contains the code that runs on your embedded hardware, and the simulation version that simulates the hardware for the purpose of testing your software in a controlled manner. You might name the standard library hal.lib (perhaps with a different extension depending on your development environment), and the simulation library hal_simulated.lib. Both would have the same interface, as described in hal.h. That is, both libraries would contain all functions declared in hal.h, such as void halInit(), int halReadProximitySensor(), etc.
Assuming your IDE supports Release and Debug configurations, you could create a third configuration for your software testing that is named SW_Test. This configuration would be a duplicate of your Debug configuration, except that the hal_simulated.lib would be linked into the project instead of the standard hal.lib.
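As a minimal sketch of what that shared interface might look like (the function names follow the examples above and are purely illustrative):

/* hal.h -- the single interface both libraries implement */
#ifndef HAL_H
#define HAL_H

void halInit(void);
int  halReadProximitySensor(void);   /* distance in millimetres, for example */

#endif

/* hal_simulated.c -- the version linked into the SW_Test configuration;
   it returns canned sensor data instead of touching registers or I/O ports. */
#include "hal.h"

static int simulatedDistanceMm = 250;   /* a test fixture could vary this */

void halInit(void)
{
    /* Nothing to do: no clocks, UARTs or ports exist in the test environment. */
}

int halReadProximitySensor(void)
{
    return simulatedDistanceMm;
}

The standard hal.lib would implement exactly the same functions against the real hardware registers, so the rest of the code never knows which version it is linked against.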
See also
Hardware Abstraction (Wikipedia)
Considering you are stuck with C, which is not an object-oriented language, I would go for a single library with some internal logic, or even #ifdefs (if performance is a must), something like:
#include <stdbool.h>
#include <stdio.h>

bool turnOn(void)
{
#ifdef DEV
    /* Development/simulation build: just log the action. */
    printf("Turned On\n");
    return true;
#endif
#ifdef PROD
    /* Production build: send the real command to the robot. */
    return robot_command_turnOn();
#endif
}
or
bool turnOn(void)
{
    if (inProduction())
        return robot_command_turnOn();
    printf("Turned On\n");
    return true;
}
Or even better:
bool turnOn(void)
{
    printf("Turned On\n");
    if (!inProduction())
        return true;
    return robot_command_turnOn();
}
There are several ways to do it. In my view, I wouldn't go for two libraries, as I would need to keep function signatures and versions in sync, and that can be a mess.
The secret is to build an interface library to your hardware (the robot, in this case) and develop it to cover all possible interactions, so that it keeps a level of abstraction from the hardware layer. That interface library can check whether you are testing the unit (using a function like inProduction() above) and send the commands to the hardware only if allowed.
Using an object-oriented language like C++, you have design patterns to help you out, e.g. the interface pattern, the factory pattern, etc.

How to implement a loader using a program? [closed]

I know that a loader loads a program into memory. But how can I implement one as a program, using assembly or C? This might be very useful, or at least point me to a reference.
Maybe you already understand this, I'm not sure. At a high level, a program loader simply reads/downloads/accepts the program, parses the file format if required, places the program in memory, and jumps/branches to it.
Now if you get more specific, say a bootloader for a processor, you generally don't have a file system or similar yet, so maybe you can only accept programs that are already in flash (one of your main use cases), or allow developers to download test versions, destined to become the program in flash, over XMODEM, YMODEM, or other protocols; or over Ethernet or USB if available and it makes sense, or from removable media (SD cards, etc.). At the end of the day you still support some type of format, be it just the raw memory image of the program or other formats (Intel hex, Motorola S-record, maybe ELF, etc.).
An operating system has a lot more work to do. Take Windows, Linux, or macOS right now: write a simple application that reads and parses a simple program, read that program into your application's memory space (or malloc some), then try to branch to it. The operating system stops you. There are ways around this, but that is not the point: you are an application, not the operating system. If you were the operating system's loader, you would simply have more permissions. Being the operating system, you have designed what the file format is, what the agreed entry-point address is, what the system interface is for applications making calls, and so on. Programs have to conform to your rules. You would then read the binary, parse it (perhaps you only support the ELF file format, for example), allocate memory for the program per your rules and the program's desired allocation of resources (ideally declared up front as part of the file format), set up the virtual address space per your operating system's rules and point it at where the program has been loaded, and then branch to the program, changing from supervisor mode to user mode on the way.
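One of those ways around it, on Linux, is to ask the OS explicitly for an executable mapping. A minimal sketch, assuming x86-64 and a system that still permits writable-and-executable mappings; a real loader would parse a file format and handle relocation rather than using a hard-coded buffer:

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    /* x86-64 machine code for:  mov eax, 42 ; ret */
    unsigned char program[] = { 0xb8, 0x2a, 0x00, 0x00, 0x00, 0xc3 };

    /* Ask the OS for memory we are allowed to execute. */
    void *mem = mmap(NULL, sizeof program,
                     PROT_READ | PROT_WRITE | PROT_EXEC,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (mem == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    memcpy(mem, program, sizeof program);             /* "load" the program */
    int (*entry)(void) = (int (*)(void))mem;          /* agreed entry point */
    printf("loaded program returned %d\n", entry());  /* "branch" to it     */

    munmap(mem, sizeof program);
    return 0;
}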
Your question is extremely vague, though; I can't tell whether you understand the basics and want detail (an application is not a loader on an OS with protection, so simply go read the source for Linux or BSD, etc.), or don't understand the basics (make a little bootloader for a microcontroller, or use an instruction set simulator if you don't want to buy a microcontroller).
I feel the best way to do what I think you are trying to do is to fork off a process and run a new program within it. If that is what you're asking, it's best done with the unistd.h functions in both C and C++, and, if you want to get a bit more direct, the pthreads library. However, if you don't yet know how these calls work, I recommend heavy reading before you mistakenly create a fork bomb and crash your system. Look into the openpub documentation if needed. I heavily recommend cleaning up this question, and I also feel that it has been asked fairly often on this site.

which services are provided by the operating system to execute the C program [closed]

Regarding the execution of a C program, I am aware that the compiler converts the code into machine language and then it is executed. But I would like to know which services are provided by the operating system to accomplish that. Say I am using the fopen function: please explain how the operating system handles it, i.e. reading the file from the hard disk, loading it into memory, etc. For all those operations, which system calls are internally called? How are explicit functions like fopen and printf converted into system calls?
If it is possible to view these internal system calls in the context of C programming, please let me know how to see them.
Languages typically have their own APIs as part of their run-time support (e.g. fopen() in C's standard library). These are part of the language and not strictly part of the OS itself.
The language's run-time uses the OS's lower level APIs. For example, fopen() might use the kernel API's open() function (Linux); but then it might be a CreateFile() function in a DLL and not something in the kernel at all (Windows). In some cases, it's nothing like that and more like a message sent to a different process (common for micro-kernels).
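As a small illustration of that layering (the file name is arbitrary): the first call below goes through the C library's buffered run-time API, while the second uses the Linux kernel API directly; on Linux, a tool such as strace will show both ending up in the open/openat system call.

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    /* Language run-time API: buffered, portable, part of the C library. */
    FILE *fp = fopen("data.txt", "r");
    if (fp != NULL)
        fclose(fp);

    /* Kernel API on Linux: roughly what fopen() uses under the hood. */
    int fd = open("data.txt", O_RDONLY);
    if (fd >= 0)
        close(fd);

    return 0;
}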
Regardless of where it ends up (and how), it probably finds its way to some sort of "virtual file system" layer, and depending on whether it's in the VFS's caches it may or may not get forwarded from there to code responsible for handling a file system, which may or may not forward it to some sort of storage device driver (e.g. a USB flash device driver), which in turn might forward it to another device driver (e.g. a USB controller driver).
Mostly, it can be very different for different OSs, so there is no single answer that's correct for all of them.

How to combine multiple small C programs together? [closed]

One of the Unix philosophies is to write small software components and then connect them together, like uzbl or git for example. But how do you combine them to make one big application? If I were interested in gluing C programs together, would I write another program that calls them with the system() call to produce the desired behaviour? What are the good practices? Where do I look for more in-depth detail in this area?
For example, I'm trying to develop a program of my own and would like to compartmentalize its different components. Taking the web browser uzbl or the version control software git as examples, how do they bind the different binaries together to make one program?
It depends on what level you're viewing this from. If you want to combine several pre-built programs at the command line, you can use a pipe as on Unix, e.g. dir | sort.
If you're developing and want to reuse existing code, you can link the existing functionality to your application as libraries, or simply reuse existing classes.
You mentioned git, which is known for originally being a collection of small tools each performing a relatively small operation on the repository, and a set of scripts (shell and Perl) that use the tools. For example, you can take a look at the code of git pull (which is actually a shell script) and see how it calls different git programs that most git users don't know about.
Generally, if you want to write a part of your program as a filter, let it simply read the input from stdin (using fread, fgets, fscanf, etc) and write the output to stdout (fprintf, fwrite, etc). Then you can call your filter in a shell script using the pipe.
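For example, a complete filter of this kind can be as small as the sketch below (the program name is invented), which upper-cases its input so it can sit in a pipeline like cat notes.txt | ./upcase | sort:

#include <ctype.h>
#include <stdio.h>

/* Read stdin, write an upper-cased copy to stdout. */
int main(void)
{
    int c;
    while ((c = getchar()) != EOF)
        putchar(toupper(c));
    return 0;
}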
Another way of combining programs is via bidirectional interprocess communication, that is, not via a pipe in a shell script but using e.g. sockets. You can split the program into two parts, a server and a client, which communicate with each other but have separate objectives. For example, the X Window System and FreeCiv are written this way.
There are programs which aren't easily decomposable to multiple smaller programs and filters. In that case, it's usually best to decompose the program to libraries, which is also part of Unix philosophy as the libraries can also be reused by other programs.
I'd also recommend looking at The Art of Unix Programming, which goes into more detail on software engineering and the Unix philosophy.
