Software licensing scheme [closed] - licensing

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 7 years ago.
I've devised the following mechanism to license software without a direct connection to a server. It seems simple, yet I fail to find any serious flaw:
I plan to use asymmetric crypto to send a message from one server (the licence server) to n clients (the n computers on which the software is installed).
The client sends (via mail, for example) some information about the computer (MAC address, machine name, you name it).
On the licence server, this information is encrypted using a well-secured (not so public) RSA "public" key; this encrypted payload is the licence.
The encrypted licence is sent back to the client.
When the software is launched, it checks for a licence file; using the corresponding RSA private key shipped with each copy of the software, it verifies that the payload was encrypted with the server's key.
Once the licence is decrypted, the software checks that it is running on the same machine the licence was issued to.
In my opinion, no one will be able to forge an encrypted payload without access to the licence server's RSA key.
Of course, the licence might be stolen and the software launched in a virtual machine that mimics a genuine client machine, or the software might be disassembled so as to unplug the licence check.
But is this scheme good enough, or am I utterly naive in this regard?
Thanks

It's a decent scheme, although are you sure you want the client to have the private key and the server the public key? Unless you're generating one keypair per install, shouldn't it be the other way around?
The scheme is simple, but is it practical? If your goal is to prevent casual cracking of your application there are simpler solutions that are just as effective. And if your goal is to prevent crackers from running your application, chances are that (a) you won't succeed and (b) your program isn't important enough to merit such attention.
And why would someone seek to attack the cryptography part of the license scheme when simply hex-editing the binary to change the license checking code to NOPs will almost certainly work just fine?
I'd rethink the licensing strategy and its importance to your product and its success if I were you.

Related

How to create IPsec associations and policies in C programming

I am building a client-server program in C, with communication between them over ESP/IPsec.
On every new client connection, the server generates a random/unique CK/IK, which is transmitted to the client by some secure mechanism. I create associations and policies for each client in the kernel's SPD and SAD using PF_KEY socket programming. But this mechanism has a problem:
it starts getting slow once about 80,000 associations are created, and my requirement is 1,000,000 associations for load testing.
I have come to know that the PF_KEY socket mechanism is old and outdated. This mechanism is KLIPS. There are two mechanisms, KLIPS and NETKEY.
How can I create IPsec associations through the NETKEY mechanism in C, from userspace?
I would strongly recommend that you use WireGuard over IPsec: it is way better, has been praised by Linus Torvalds, and was developed by a very talented security researcher.
Sorry if this is slightly off-topic, but it looks like you are still exploring. IPsec is from the 1990s and has indeed gained a lot over the years, but WireGuard was started from scratch just a few years ago, with the merit of a very short (and easily auditable) codebase that takes advantage of recent cryptographic advances.
You can find this beautiful project, and a mirror of it, on GitHub; it is mainly coded in C (as is the original project).
NB: WireGuard runs over UDP and can be configured to listen on port 443, which helps users get through the kinds of great firewalls built by dictatorships 2.0 around the world.
You can use the davici library for strongSwan.
https://github.com/strongswan/davici

Make REST API call from C without using libcurl [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 4 years ago.
I was trying to make REST calls from C and came across libcurl, which did that successfully. But the code needs to be ported to a Cortex-M0 board, which needs a smaller footprint. Is there any workaround? All I need is to make a REST API call from C without any external library or overhead.
Well, how low do you want to go?
C doesn't know anything about REST; it doesn't know HTTP, not even TCP or anything like a network interface. On bare metal, you'd start by reading the hardware specs of your network interface card and programming it (through ports, memory-mapped registers, etc.). You'd have to understand ARP, IP, ICMP, etc. (and, of course, implement them), just to get a TCP connection on top of that.
Assuming there's an operating system in place, you'll be given some API, and then the answer would depend on what this API allows. A typical level would be a "socket abstraction", like BSD sockets, which gives you functionality to establish a TCP connection. So "all" you'd have to do is implement an HTTP client on top of that.
Unfortunately, HTTP itself is a complex protocol. You'd have to implement all the requests you need, with Content-Types, transfer encodings, etc., and also handle all possible server responses appropriately. That's a lot. Bring content negotiation into the picture, partial responses, etc., and it's "endless" work. That's exactly the reason there are libraries like curl that already implement all of this for you.
So, sorry to say that, but there's no simple answer possible giving you what you want here. If you want to get the job done, use a library. Maybe you can find something smaller than libcurl.
What you can do is compile the library yourself, link it statically, and use compiler options like gcc's -ffunction-sections -fdata-sections together with the linker option --gc-sections, in an attempt to drop library code you don't use; this might help to reduce the size.
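To give a feel for what the socket-level route involves, here is a minimal HTTP/1.0 GET sketch over BSD sockets: no TLS, no redirects, no chunked encoding. The host and path are placeholders, and on a Cortex-M0 the socket calls would be replaced by whatever the board's network stack provides (e.g. lwIP):

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>

/* Format a bare-bones HTTP/1.0 request; returns its length. */
int build_request(char *buf, size_t size, const char *host, const char *path) {
    return snprintf(buf, size,
                    "GET %s HTTP/1.0\r\nHost: %s\r\nConnection: close\r\n\r\n",
                    path, host);
}

/* Send the request over a plain TCP socket and dump the response.
   Returns 0 on success, -1 on any failure. */
int http_get(const char *host, const char *path) {
    struct addrinfo hints = {0}, *res;
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo(host, "80", &hints, &res) != 0)
        return -1;

    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) < 0) {
        if (fd >= 0) close(fd);
        freeaddrinfo(res);
        return -1;
    }
    freeaddrinfo(res);

    char req[512];
    int len = build_request(req, sizeof req, host, path);
    if (write(fd, req, (size_t)len) != len) { close(fd); return -1; }

    char buf[1024];
    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0)
        fwrite(buf, 1, (size_t)n, stdout);
    close(fd);
    return 0;
}

int main(int argc, char **argv) {
    /* "example.com" is a placeholder host; pass a real host as argv[1]
       to actually connect, otherwise just show the wire format. */
    char req[512];
    build_request(req, sizeof req, "example.com", "/");
    fputs(req, stdout);
    if (argc > 1)
        return http_get(argv[1], "/") == 0 ? 0 : 1;
    return 0;
}
```

Even this skeleton ignores response parsing, timeouts, and error recovery, which is the answer's point: the "no library" route means writing all of that yourself.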

Hardware accelerated cryptography -- fastest access from userspace? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 7 years ago.
So I have an embedded (Linux) system with a crypto co-processor and two userspace applications which need to use it: SSL (httpd) and proprietary code; maximizing speed and efficiency is the main requirement. I spent the day examining the kernel hooks and the part's registers, and have come to three possible solutions:
1) Access the co-processor directly since it's memory mapped;
2) Use the /dev/crypto library
3) Use OpenSSL calls for my proprietary application
During standard operation, SSL is used very rarely and the proprietary application produces a very heavy load of plaintext needing crypto. Here are the pros and cons of each option as I see them, and how I got into this quandary:
1) Direct access
--Pros: probably the fastest method, closest to complete control of the crypto co-processor, least overhead, great for the proprietary app
--Cons: Race conditions or interference could occur when SSL is being used... I'm not sure how bad two userspace apps trying to asynchronously share a hardware resource could hork things up, and I may not know until a customer finds out and complains
2) /dev/crypto
--Pros: SSL already uses it, I believe it's session-based, so sharing problems would be mitigated if not avoided completely
--Cons: More overhead, lack of documentation for proper ioctl()s to configure the co-processor correctly for optimal, high duty cycle use
3) Use SSL
--Pros: already set up and working with /dev/crypto, rarely used... so it's just there and available for crypto calls, and probably the best resource sharing management
--Cons: Probably the most overhead, may not be using /dev/crypto as efficiently as possible, and things could get bursty when both the proprietary app and httpd require SSL
I'd really like to use option 1, and will code up a test framework in the morning, but I'm curious if anyone else out there has had this problem or has any opinions. Thanks!

What is the difference between the Linux kernel subsystems dm-crypt and eCryptfs? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 9 years ago.
I was trying to read the source of eCryptfs in Linux. Could anyone explain the distinction between the Linux kernel subsystems dm-crypt and eCryptfs? Are there any reference books that introduce the eCryptfs source? Thanks for helping me.
dm-crypt and eCryptfs are both features tightly integrated into the Linux kernel that encrypt data at rest. Both have been upstream in the Linux kernel since at least 2006, and are heavily used by consumers and enterprises. The approach each takes, though, is quite different.
dm-crypt provides "block"-level encryption. With dm-crypt, the Linux kernel creates an entire encrypted block device, which can then be used like any other block device in the system. It can be partitioned, carved into an LVM or RAID, or used directly as a disk. This does mean, however, that you have to decide to use encryption up front, pre-allocate the space, and then create and format a filesystem. It's extremely fast and efficient, especially when your CPU supports Intel's AES-NI cryptographic acceleration. However, there is only a single key used for the entire block device. As such, it's a bit of a blunt, all-or-nothing approach to encryption.
eCryptfs provides "per-file" encryption. eCryptfs is a fully POSIX-compliant stacked filesystem for Linux. eCryptfs stores metadata in the header of each file, so that encrypted files can be copied between hosts; the file will be decrypted with the proper key in the Linux kernel keyring. There is no need to keep track of any additional information aside from what is already in the encrypted file itself. You may think of eCryptfs as a sort of "GnuPG as a filesystem". Different files can be encrypted with different keys, and filenames can optionally be encrypted. File attributes, however, are not masked, so an attacker could see the approximate size of a file, its ownership, permissions, and timestamps. Since eCryptfs is a layered filesystem, you don't have to pre-allocate the space ahead of time. You just mount one directory on top of another (a little like NFS); all data written to and read from the upper directory (assuming you have the key) looks like plaintext data, but all of it is encrypted before being written to disk below as ciphertext. Since eCryptfs has to process keys and metadata on a per-file basis, it performs a little more slowly than dm-crypt on saturated reads and writes.
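The contrast shows up directly in how each is set up. A rough command-level sketch (device names and mount points are placeholders, cryptsetup syntax varies slightly by version, and luksFormat destroys the target device):

```sh
# dm-crypt: encrypt a whole block device, then put a filesystem on the mapping
cryptsetup luksFormat /dev/sdb1          # one key for the entire device
cryptsetup open /dev/sdb1 secure
mkfs.ext4 /dev/mapper/secure
mount /dev/mapper/secure /mnt/secure

# eCryptfs: stack an encrypted view of one directory over another
mount -t ecryptfs /home/user/.Private /home/user/Private
```

Note how dm-crypt needs the space carved out and formatted up front, while eCryptfs is just a mount over an existing directory.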
Most Linux distributions support dm-crypt to some extent in their installers, as does Android. You can use dm-crypt to encrypt the entire device or root installation of a desktop, tablet, phone, or server, but this typically means that the system can no longer boot unattended, as you will need to enter a passphrase interactively at boot.
For this reason, Ubuntu added support for eCryptfs in its installer, enabling users to encrypt only sensitive parts of the disk, such as their home directories, and leveraging the user's login passphrase to unwrap a special, long, randomly generated key. Approximately 3 million Ubuntu users leverage eCryptfs to encrypt their home directory. Some commercial network attached storage devices, such as Synology, use eCryptfs to encrypt the data at rest. And every Google Chromebook device uses eCryptfs to secure and encrypt the user's local cache and credentials at rest.
Full disclosure: I am one of the authors and maintainers of eCryptfs.

How to sandbox and analyse traffic with a firewall [closed]

It's difficult to tell what is being asked here. This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form.
Closed 10 years ago.
I read about the Palo Alto WildFire product. There it's said:
WildFire, which provides the ability to identify malicious behaviors
in executable files by running them in a virtual environment and
observing their behaviors
I didn't understand how I can programmatically analyse this malware behavior.
[update] My confusion is how a firewall can analyse live traffic by putting it in a virtual environment and executing it! Say someone is exploiting a PDF vulnerability. How can a firewall programmatically analyse that?
To understand these kinds of products, you first need to recognize the behavior of malware and similar software. Usually such programs claim to be something else and, upon execution, start performing tasks that don't match the standard behavior of similar applications.
Modern firewall products have code which tracks the activities performed by a downloaded executable.
For example, your firewall detects a session in which an application copies an executable, claiming to be a media player, onto your system. These products attempt complete L7 detection, i.e. identifying which application is being used and which file it copied. Then they run the received file on test machines.
The firewall also monitors the virtual machine for abnormal behaviors, such as the received player trying to copy a lot of files on its own, reading other information from disk, writing to the machine's file system, or opening sockets to send data back somewhere. None of this is expected of a standard program of that type. Products at this level have a generic programmed framework that defines the types of actions valid for each recognized application type. If a program performs behavior beyond that list, it is termed suspicious.
Details of these are in the domain of Intrusion detection (IDS/IPS).
To sum up, the key here is not merely dissecting real-time traffic, but also, upon completion of a session, monitoring the activities performed by a downloaded program.
Lastly, once an identified application is flagged malicious, manual as well as automated mechanisms are used to identify traffic of that kind. This is where signatures, connection patterns, payload lengths, and other factors come into play. Snort is one example of a tool used to define such rules, and there are many others.
Once you establish a criterion, such as "this media-player lookalike actually shows pattern y in 90% of cases", traffic can be blocked right away for that particular session.
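As a hypothetical illustration of such a signature, a Snort rule that flags a Windows PE header ("MZ") at the start of a server-to-client payload might look like this (the sid is from the local-rules range; the rule is illustrative, not production-ready):

```
alert tcp $EXTERNAL_NET any -> $HOME_NET any (msg:"Possible PE executable transfer"; flow:to_client,established; content:"MZ"; depth:2; sid:1000001; rev:1;)
```

Real rulesets combine many such content matches with protocol decoding to keep false positives manageable.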
The company probably won't tell you how they do it, but a naive guess is that they somehow use the firewall to first send files to the virtual machine, test them, and then send them on to the end user. So the firewall itself is most likely not programmatically analyzing anything.