Protecting crypto keys in RAM? - c

Is there any way to protect encryption keys that are being stored in RAM from a freezer attack? (Sticking the computer in a freezer so the RAM contents persist, then rebooting into malicious code to read them out.)
This seems to be a legitimate issue with security in my application.
EDIT: it's also worth mentioning that I will probably be making a proof of concept OS to do this on the bare metal, so keep in mind that the fewer dependencies, the better. However, TRESOR does sound really interesting, and I might port the source code of that to my proof of concept OS if it looks manageable, but I'm open to other solutions (even ones with heavy dependencies).

You could use something like the TRESOR Linux kernel patch to keep the key only in the CPU debug registers, which are accessible only from ring 0 (the highest privilege level). Combined with an Intel CPU that supports the AES-NI instruction set, this doesn't need to result in a performance penalty (despite the need for key recalculation) compared to a generic encryption implementation.

There is no programmatic way. You cannot stop an attacker from freezing your computer and removing the RAM chips for analysis.
If someone gains physical access to your hardware, everything you have on it is in the attacker's hands.
Always keep in mind:
http://cdn.howtogeek.com/wp-content/uploads/2013/03/xkcd-security.png

As Sergey points out, you cannot stop someone from attacking the RAM if the hardware is in their possession. The only possible solution to defend hardware is with a tamper resistant hardware security module. There are a couple of varieties on the market: TPM chips and Smart Cards come to mind. Smart cards may work better for you because the user should remove them from the device when they walk away, and you can simply erase the keys when the card is removed.
I would do a bit more risk analysis to help you figure out how likely a frozen-RAM attack really is. Which computers are most at risk of being stolen? Laptops, servers, tablets, or smart phones? What value can your attackers possibly get from a stolen computer? Are you looking to keep them from decrypting an encrypted disk image? From recovering a document that's currently loaded in RAM? From recovering a key that would lead to decrypting an entire disk? From recovering credentials that would provide insider access to your network?
If the risks are really that high but you have a business need for remote access, consider keeping the secrets only on the secured corporate servers, and allowing only browser access to them. Use two factor authentication, such as a hardware access token. Perhaps you then require the remote machines to be booted only from read-only media and read-only bookmark lists to help ensure against viruses and other browser based attacks.
If you can put a monetary value on the risk, you should be able to justify the additional infrastructure needed to defend the data.

Related

Approach to properly archiving and backing up data, preventing data loss and corruption

I'm looking for a proper way to archive and back up my data. This data consists of photos, videos, documents and more.
There are two main things I'm afraid might cause data loss or corruption: hard drive failure and bit rot.
I'm looking for a strategy that can ensure my data's safety.
I came up with the following: one hard drive which I will regularly use to store and access the data, a second hard drive which will serve as an onsite backup of the first one, and a third hard drive which will serve as an offsite backup. I am, however, not sure if this is sufficient.
I would prefer to use regular drives rather than network-attached storage, but if NAS is better suited I will adapt.
One of the things I read about that might help with bit rot is ZFS. ZFS does not prevent bit rot but can detect data corruption by using checksums. This would allow me to recover a corrupted file from a different drive and copy it to the corrupted one.
I need at least 2 TB of storage, but I'm considering 4 TB to cover potential future needs.
What would be the best way to safely store my data and prevent data loss and corruption?
For your local system plus local backup, I think a RAID configuration / ZFS makes sense, because you're just trying to handle single-disk failures / bit rot, and having a synchronous copy of the data at all times means you won't lose the data written since your last backup was taken. With two disks ZFS can do a mirror and handles bit rot well, and if you have more disks you may consider RAIDZ configurations, since they use less storage overall to provide single-disk failure recovery. I would recommend ZFS here over general RAID solutions because it has a better user interface.
For your offsite backup, ZFS could make sense too. If you go that route, periodically use zfs send to copy a snapshot on the source system to the destination system. Arguably, you should use mirroring or RAIDZ on the backup system as well, to protect against bit rot there too.
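If you script the snapshot-and-send step yourself, it could look roughly like the sketch below, which just shells out to the standard zfs and ssh tools. The dataset names, the remote host, and the schedule are placeholders, and the first run has to be a full rather than incremental send.

    import subprocess
    import time

    def replicate(dataset="tank/data", remote="backup-host",
                  remote_dataset="backup/data", previous_snapshot=None):
        # Take a new snapshot named after the current time.
        snapshot = f"{dataset}@auto-{int(time.time())}"
        subprocess.run(["zfs", "snapshot", snapshot], check=True)

        # Incremental send if we know the last snapshot that reached the backup
        # box, otherwise a full initial send.
        if previous_snapshot:
            send_cmd = ["zfs", "send", "-i", previous_snapshot, snapshot]
        else:
            send_cmd = ["zfs", "send", snapshot]

        # Pipe `zfs send` on this machine into `zfs recv` on the offsite machine.
        send = subprocess.Popen(send_cmd, stdout=subprocess.PIPE)
        subprocess.run(["ssh", remote, "zfs", "recv", "-F", remote_dataset],
                       stdin=send.stdout, check=True)
        send.stdout.close()
        if send.wait() != 0:
            raise RuntimeError("zfs send failed")
        return snapshot  # pass this in as previous_snapshot on the next run

Run it from cron or a systemd timer at whatever interval matches how much recent data you can afford to lose.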
That said, there are a lot of products that will do the offsite backup for you automatically, and if you have an offsite backup, the only advantage of having an on-site backup is faster recovery if you lose your primary. Since we're just talking about personal documents and photos, taking a little while to re-download them doesn't seem super high stakes. If you use Dropbox / Google Drive / etc. instead, this will all be automatic, with a nice UI and support people to yell at if anything goes wrong. Also, storage at those companies will have much higher failure tolerance, because they use huge numbers of disks (allowing things like RAIDZ with tens of parity disks, replicated across multiple geographic locations) and they have security experts to make sure your data is not stolen by hackers.
The only downsides are cost, and not being as intimately involved in building the system, if that part is fun for you like it is for me :).

Charts or Stats comparing Database vs. HTTP vs. Direct File Access Performance?

I am wondering what the stats are for different ways of storing (and therefore retrieving) content. Are there any charts out there, or do you guys have any quick tests to show the requests per second, etc., of:
Direct (local) database access, vs.
HTTP Access to cached data, vs.
HTTP Access to uncached data (remote database), vs.
Direct File access
I am trying to judge how necessary it is to cache data locally if I'm using remote services.
Thanks!
... what the stats are ...
Although some people may have published their findings, they will not map directly to your experience - you may find the opposite of what they discovered.
Sometimes it may be faster to retrieve a file from a database than from the filesystem - it depends on the size of the file, the filesystem or DBMS it resides on, the other data which affects the access path (e.g. indexes, the number of I/O operations needed to dereference the start of the file, ...), the underlying hardware, the amount of caching available, the presence of the data (or of information relating to its location) in the cache, and the interaction between each of these factors.
And that's before you start considering the additional variables introduced when you start talking about HTTP, which also implies remote network access.
While ultimately any file would need to be read from the filesystem at some point, this suggests that direct file access would be the fastest method (but only on the local machine); however, once you consider centralized caching and concurrency, this is not necessarily the case.
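Since published numbers won't map onto your stack, the most reliable option is a quick measurement of your own. Here is a minimal, hedged sketch of such a micro-benchmark; the file path, database, table and URL are placeholders standing in for your real content store, cached endpoint and uncached endpoint.

    import sqlite3
    import time
    import urllib.request

    N = 1000  # repetitions per method; adjust to something representative

    def timed(label, fn):
        """Run fn() N times and report requests per second."""
        start = time.perf_counter()
        for _ in range(N):
            fn()
        elapsed = time.perf_counter() - start
        print(f"{label}: {N / elapsed:.0f} req/s ({elapsed / N * 1000:.2f} ms avg)")

    # Direct (local) file access -- the path is a placeholder
    timed("file", lambda: open("data/sample.bin", "rb").read())

    # Direct (local) database access -- a throwaway SQLite table as a stand-in
    conn = sqlite3.connect("sample.db")
    timed("db", lambda: conn.execute(
        "SELECT payload FROM content WHERE id = 1").fetchone())

    # HTTP access -- point this at your cached and uncached endpoints in turn
    timed("http", lambda: urllib.request.urlopen(
        "http://localhost:8080/content/1").read())

Run it on hardware and data sizes close to production, otherwise the numbers will mislead you for exactly the reasons above.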
I am trying to judge how necessary it is to cache data locally if I'm using remote services.
Rather hard to say. How remote? What are your bandwidth costs? Latency? What level of service do you hope to provide? Does the remote system provide caching information already? How do you deal with cache invalidation?
If we knew everything about your application, the data source, your customers, the networks connecting them and your budget for implementing the service, then we might hazard a guess. And, yes, caching on the intermediary ("man in the middle") server probably is a good idea, but only if you know that you're not breaking anything by caching.
C.

Combining cache methods - memcache/disk based

Here's the deal. We would have taken the complete static html road to solve performance issues, but since the site will be partially dynamic, this won't work out for us.
What we have thought of instead is using memcache + eAccelerator to speed up PHP and take care of caching for the most used data.
Here are the two approaches we have thought of so far:
Using memcache on >>all<< major queries and leaving it alone to do what it does best.
Using memcache for the most commonly retrieved data, and combining it with a standard hard-drive-stored cache for further usage.
The major advantage of only using memcache is of course the performance, but as the number of users increases, the memory usage gets heavy. Combining the two sounds like a more natural approach to us, even though there is a theoretical compromise in performance.
Memcached appears to have some replication features available as well, which may come in handy when it's time to increase the nodes.
What approach should we use?
- Is it unwise to compromise and combine the two methods? Should we instead focus on utilizing memcache alone and simply upgrade the memory as the load increases with the number of users?
Thanks a lot!
Compromising and combining the two methods is a very sensible approach, I think.
The most obvious cache management rule is the latency vs. size trade-off, which is also used in CPU caches: in a multi-level cache, each successive level should be larger to compensate for its higher latency. You get higher latency, but also a higher cache hit ratio. So I would not recommend placing the disk-based cache in front of memcache; conversely, it should be placed behind memcache. The only exception is if your cache directory is mounted in memory (tmpfs). In that case the file-based cache can absorb high load on memcache, and can even have a latency benefit (because of data locality).
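To make the layering concrete, here is a hedged sketch of a two-tier lookup in Python: memcache first, then the disk cache, then the database. mc is assumed to be any memcached client exposing get/set, and load_from_database and DISK_CACHE_DIR are placeholders for your own code; keys are assumed to be filesystem-safe.

    import os
    import pickle

    DISK_CACHE_DIR = "/var/cache/app"  # placeholder; ideally tmpfs per the note above

    def cache_get(key, mc, load_from_database):
        # Tier 1: memcache (small, fast, lowest latency).
        value = mc.get(key)
        if value is not None:
            return value

        # Tier 2: disk cache (bigger, slower, higher hit ratio).
        path = os.path.join(DISK_CACHE_DIR, key)
        if os.path.exists(path):
            with open(path, "rb") as f:
                value = pickle.load(f)
            mc.set(key, value)  # promote back into the faster tier
            return value

        # Miss in both tiers: hit the backend and populate both caches.
        value = load_from_database(key)
        os.makedirs(DISK_CACHE_DIR, exist_ok=True)
        with open(path, "wb") as f:
            pickle.dump(value, f)
        mc.set(key, value)
        return value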
These two stores (file-based and memcache) are not the only ones convenient for caching. You could also use almost any key-value database, as they are very good at concurrency control.
Cache invalidation is a separate question which deserves your attention. There are several tricks you can use to handle cache updates on cache misses more gracefully. One of them is dog-pile effect prevention: if several concurrent threads get a cache miss simultaneously, all of them go to the backend (database); the application should allow only one of them to proceed, and the rest should wait on the cache. The second is background cache updates: it is better to update the cache not in the web request thread but in the background, where you can control the concurrency level and update timeouts more gracefully.
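Here is a hedged sketch of the dog-pile protection, using memcached's add() as a lock (add succeeds only if the key does not exist yet). mc is assumed to be a client whose add() reports whether the key was newly stored (python-memcached behaves this way); rebuild() stands in for your backend query.

    import time

    LOCK_TTL = 30        # seconds; must exceed the worst-case rebuild time
    WAIT_INTERVAL = 0.05

    def get_with_dogpile_protection(mc, key, rebuild, ttl=300):
        value = mc.get(key)
        if value is not None:
            return value

        lock_key = key + ":lock"
        if mc.add(lock_key, "1", LOCK_TTL):
            # We won the race: rebuild the entry, then release the lock.
            try:
                value = rebuild()
                mc.set(key, value, ttl)
                return value
            finally:
                mc.delete(lock_key)

        # Somebody else is rebuilding: wait briefly for the cache to refill.
        for _ in range(100):
            time.sleep(WAIT_INTERVAL)
            value = mc.get(key)
            if value is not None:
                return value
        return rebuild()  # give up waiting and fall back to the backend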
Actually, there is one neat method which allows you to do tag-based cache tracking (memcached-tag, for example). It is very simple under the hood: with every cache entry you save a vector of the versions of the tags it belongs to (for example: {directory#5: 1, user#8: 2}). When you read a cache entry you also read the current version of each of its tags from memcached (this can be done efficiently with a multi-get). If at least one current tag version is greater than the version saved in the entry, the entry is invalid. And when you change an object (for example a directory), the appropriate tag version is incremented. It is a very simple and powerful method, but it has its own disadvantages: you cannot evict invalidated entries eagerly in this scheme, so memcached may drop live entries while keeping stale ones around.
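A hedged sketch of that tag-versioning scheme, assuming a memcached client that can store Python objects and exposes get/set/get_multi (python-memcached behaves this way); all names are placeholders.

    import time

    def tag_key(tag):
        return "tag:" + tag

    def current_tag_versions(mc, tags):
        # Fetch the live version of every tag, initialising missing ones.
        found = mc.get_multi([tag_key(t) for t in tags])
        versions = {}
        for t in tags:
            v = found.get(tag_key(t))
            if v is None:
                v = int(time.time())  # any growing value works as a version
                mc.set(tag_key(t), v)
            versions[t] = int(v)
        return versions

    def cache_set(mc, key, value, tags):
        # Store the value together with the tag versions it was built against.
        mc.set(key, {"value": value, "tags": current_tag_versions(mc, tags)})

    def cache_get(mc, key):
        entry = mc.get(key)
        if entry is None:
            return None
        live = current_tag_versions(mc, list(entry["tags"]))
        # If any tag moved forward since the entry was written, treat it as a miss.
        if any(live[t] > saved for t, saved in entry["tags"].items()):
            return None
        return entry["value"]

    def invalidate_tag(mc, tag):
        # Bumping the tag version implicitly invalidates every entry carrying it.
        v = mc.get(tag_key(tag))
        mc.set(tag_key(tag), (int(v) + 1) if v is not None else int(time.time()))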
And of course you should remember: "There are only two hard things in Computer Science: cache invalidation and naming things" - Phil Karlton.
Memcached is quite a scalable system. For instance, you can replicate the cache to decrease access time for certain key buckets, or implement the Ketama algorithm, which enables you to add or remove memcached instances from the pool without remapping all the keys. That way you can easily add new machines dedicated to memcached whenever you happen to have extra memory. Furthermore, since instances can be run with different sizes, you can bring one up simply by adding more RAM to an old machine. Generally, this approach is more economical and to some extent not inferior to the first one, especially for multiget() requests. Regarding a performance drop as the data grows: the runtime of the algorithms used in memcached does not vary with the size of the data, so access time depends only on the number of simultaneous requests. Finally, if you want to tune your memory/performance priorities you can set the expiry time and the available-memory configuration values, which will restrict RAM usage or increase cache hits.
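For reference, the core idea behind Ketama is plain consistent hashing: keys map onto a hash ring of server points, so adding or removing an instance only remaps the keys on the affected arc rather than the whole key space. A minimal sketch (the server addresses are placeholders, and a real client library would do this for you):

    import bisect
    import hashlib

    class HashRing:
        def __init__(self, servers, points_per_server=100):
            self._ring = []  # sorted list of (hash, server) points
            for server in servers:
                for i in range(points_per_server):
                    h = self._hash(f"{server}-{i}")
                    bisect.insort(self._ring, (h, server))

        @staticmethod
        def _hash(value):
            return int(hashlib.md5(value.encode()).hexdigest(), 16)

        def server_for(self, key):
            h = self._hash(key)
            idx = bisect.bisect(self._ring, (h, ""))  # first point clockwise of key
            if idx == len(self._ring):
                idx = 0  # wrap around the ring
            return self._ring[idx][1]

    ring = HashRing(["10.0.0.1:11211", "10.0.0.2:11211", "10.0.0.3:11211"])
    print(ring.server_for("user:42"))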
At the same time, when you use a hard disk the file system can become a bottleneck for your application. Besides general I/O latency, things such as fragmentation and huge directories can noticeably affect your overall request speed. Also, beware that default Linux hard disk settings are tuned more for compatibility than for speed, so it is advisable to configure them properly before use (for instance, you can try the hdparm utility).
Thus, before adding one more integration point, I think you should tune the existing system. Usually a properly designed database, well-configured PHP, memcached and sensible handling of static data are enough even for a high-load web site.
I would suggest that you first use memcache for all major queries. Then test to find the queries that are used least, or the data that rarely changes, and provide a cache for those.
If you can isolate common data from rarely used data, then you can focus on improving performance on the more commonly used data.
Memcached is something that you use when you're sure you need to. You don't worry about it being heavy on memory, because when you evaluate it, you include the cost of the dedicated boxes that you're going to deploy it on.
In most cases putting memcached on a shared machine is a waste of time, as its memory would be better used caching whatever else it does instead.
The benefit of memcached is that you can use it as a shared cache between many machines, which increases the hit rate. Moreover, you can have the cache size and performance higher than a single box can give, as you can (and normally would) deploy several boxes (per geographical location).
Also the way memcached is normally used is dependent on a low latency link from your app servers; so you wouldn't normally use the same memcached cluster in different geographical locations within your infrastructure (each DC would have its own cluster)
The process is:
Identify performance problems
Decide how much performance improvement is enough
Reproduce problems in your test lab, on production-grade hardware with necessary driver machines - this is nontrivial and you may need a lot of dedicated (even specialised) hardware to drive your app hard enough.
Test a proposed solution
If it works, release it to production, if not, try more options and start again.
You should not
Cache "everything"
Do things without measuring their actual impact.
As your performance test environment will never be perfect, you should have sufficient instrumentation / monitoring that you can measure performance and profile your app IN PRODUCTION.
This also means that every single thing that you cache should have a cache hit/miss counter on it. You can use this to determine when the cache is being wasted. If a cache has a low hit rate (< 90%, say), then it is probably not worthwhile.
It may also be worth having the individual caches switchable in production.
Remember: OPTIMISATIONS INTRODUCE FUNCTIONAL BUGS. Do as few optimisations as possible, and be sure that they are necessary AND effective.
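As an aside on the hit/miss counters mentioned above, here is a hedged sketch of what that instrumentation might look like: each cache sits behind a thin wrapper that counts hits and misses and can be switched off live. The underlying cache object (anything with get/set) is a placeholder.

    import threading

    class InstrumentedCache:
        def __init__(self, name, backing_cache, enabled=True):
            self.name = name
            self.backing = backing_cache  # anything with get(key) / set(key, value)
            self.enabled = enabled        # allows switching the cache off in production
            self.hits = 0
            self.misses = 0
            self._lock = threading.Lock()

        def get(self, key):
            if not self.enabled:
                return None
            value = self.backing.get(key)
            with self._lock:
                if value is None:
                    self.misses += 1
                else:
                    self.hits += 1
            return value

        def set(self, key, value):
            if self.enabled:
                self.backing.set(key, value)

        def hit_rate(self):
            total = self.hits + self.misses
            return self.hits / total if total else 0.0  # expose via your monitoring

If hit_rate() stays below roughly 90% in production, that cache is probably not paying for itself.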
You can delegate the combination of disk/memory cache to the OS (if your OS is smart enough).
For Solaris, you can actually even add SSD layer in the middle; this technology is called L2ARC.
I'd recommend you to read this for a start: http://blogs.oracle.com/brendan/entry/test.

Should I use a dedicated network channel between the database and the application server?

Should I use a dedicated network channel between the database and the application server?
...or...
Connecting both to the switch along with all the other computer nodes makes no difference at all?
What matters here is performance!
It all depends on the throughput needs of your application. If you absolutely need the lowest latency possible, then it would make sense to optimize the routes. Aside from hugely scalable software, I would argue that this is rarely needed and you can just connect everything in a generic fashion.
It depends on your non-functional requirements. Assuming the NICs are running at the same rate, keeping the database traffic away from the front-end traffic can only be a good thing from a bandwidth perspective - if bandwidth is an issue.
Far more significant is that security is improved by keeping the front-side and data-sides on different networks as the only way to gain direct access to the database is to compromise the application server.
Using the shared switch could give increased latency, especially if the switch is busy. Also, you may be able to hook up a faster dedicated network channel (e.g. gigabit ethernet, if your switch is 100Mbit). Whether any of this is worth doing or not depends on your application though.
You may also want to use a dedicated channel for increased security (making your database server less accessible).

How to protect application against duplication of a virtual machine

We are using standard items such as Hard Disk and CPU ID to lock our software licenses to physical hardware. How can we reduce the risk of customers installing onto a virtual machine and then cloning the virtual machine, bypassing our licensing?
One approach is to have a licensing server. When you enter a license code into the client (on a VM), it contacts the server and sends its license code and other information. It then contacts the server repeatedly (you define the interval -- maybe once every few hours) asking 'Am I still valid?'. Along with this request, it sends a unique ID. The server replies 'Yes, you are valid' and sends a new unique ID back to the client. The client sends this new unique ID back with its next request to the server. The server verifies that this is the same ID it sent to the client for that license on the previous request.
If the VM is duplicated, the next time it asks the server 'Am I valid?', the unique ID will be incorrect either for it or for the other VM. They cannot both continue to work.
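A hedged, in-memory sketch of that rolling-token check, seen from the server side; a real implementation would persist its state, run over an authenticated and encrypted channel, and add the grace-period handling discussed below. The function and storage names are placeholders.

    import secrets

    # license_code -> the token we expect that client to present on its next request
    expected_token = {}

    def activate(license_code):
        """Called once, when the customer enters the license code."""
        token = secrets.token_hex(16)
        expected_token[license_code] = token
        return token  # the client stores this for its next ping

    def heartbeat(license_code, presented_token):
        """Called by the client every few hours: 'Am I still valid?'"""
        if expected_token.get(license_code) != presented_token:
            # Either this instance is a clone, or a clone has already rotated
            # the token; in both cases the rolling chain is broken.
            return {"valid": False}
        new_token = secrets.token_hex(16)
        expected_token[license_code] = new_token  # the old token is now useless
        return {"valid": True, "token": new_token}

Because only one instance can ever hold the current token, a clone and its original cannot both keep answering heartbeats successfully, which is exactly the property described above.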
You will need to determine what to do if the server goes down, or the network goes down, such that the client cannot communicate with the server. Do you immediately disable your software? Bad idea! Don't make your customers angry. You'll want to give them a grace period. How long should this be? A few days? Weeks?
Let's say you give them a 1-month grace period. In theory, they could clone the parent VM just after entering the license key, then restore the other VMs to this clone just before their grace period runs out, disabling network access to them. This would be a hassle for your customers though, just to have pirated additional copies of your software. You have to determine what kind of grace period won't hassle your legitimate customers, while hopefully giving you the protection you seek.
Additional protection could be achieved by verifying that the VM's clock is set correctly. This would prevent the above approach to pirating.
Another consideration is that a savvy user could write their own licensing server to communicate with the VM instances, and tell them all 'you're good' -- so encrypting the communication could help deter this. How far you want to go here really depends on how much you think pirating really might be an issue with your customers. In the end you won't be able to stop true pirates who have time on their hands, but you can keep honest users honest.
License. Tell your users they may not run unlicensed copies.
We are actually failing to buy a license for a piece of software at the moment, because the vendor is scared of virtual machines: the infrastructure for our department is being moved to a centralized virtualized solution, and we have to fight the vendor to be allowed to buy a license for his software!
Don't be afraid of paying users.
People too cheap to buy licenses are going to look for another solution and will be too much hassle anyway.
(good luck telling your boss that, though...)
There is no good reason to lock to a physical machine. Last I checked computers can break down, and then the user is probably going to be inconvenienced not only by a dead computer, but by having to call you to get the software locked to a new machine. If you must do draconian license management use a (local) management server and have running copies verify that they have a license every few minutes. Just realize that whatever you do if someone really wants to use your software without paying you they will find a way.
You need something outside the computer "hardware" to authenticate against. Most companies choose hardware keys (dongles) for high-cost software where users will put up with it.
Other companies use online methods - if more than one user with the same CPU ID and other hardware identifiers is concurrently using a given license, disallow another instantiation, or close the existing one.
You have to choose protection according to your needs and the consumer's willingness to jump through your anti-piracy hoops.
-Adam
There's not a lot you can do AFAIK, except require periodic online activation.
We have problems with people Norton-ghosting physical machines. Apparently HDD serial numbers are ghosted too.
If your software runs under a VM, then it will run under any number of cloned VMs. Therefore, the only option seems to be preventing it from running under a VM at all. Here's an article about virtual machine detection: Detect if your program is running inside a Virtual Machine, and one about thwarting it.
By the way, cloning a VM is usually enough of a hassle to deter casual users from bypassing your licensing and those hell bent on cracking will probably find a way to bypass it anyway.
"Don't bother" is the short version. It's non trivial enough for your clients to do it that if they are doing that, then either they won't pay for what they use no matter what (they will not use it unless they can get it for free) or you are just flat charging to much (as in you are gouging.)
The "real" customer will generally pay for the stuff. From what I've seen, places like businesses will generally consider it not worth the effort.
I know some virtual machine software (at least VMware) has features that allow software to detect virtualization. But there is no foolproof way; it's possible to patch such features away anyway. Mysteriously changing performance (due to CPU spikes on the host) could also be used, but its reliability is questionable. There is a plethora of "signs of being virtualized", but they tend not to be 100% reliable.
It is a problem, and any savvy user will be able to defeat pretty much anything you do about it. Unsavvy users might get caught by behaviors like VMware Player changing the MAC address and other IDs of the virtual machine when you move it, presumably in a nod to this kind of issue.
The best solution is likely to use a license server instead, since that server will count the number of active licenses. Node locking is easier to defeat, and using a server tends also to push responsibility onto an IT department that is more sensitive to not breaking license agreements compared to individual users who just want to get their job done as quickly as possible.
But in the end, I agree that it all falls back to proper license language and having customers you trust somewhat. If you think that people are making a fool of you in this way, you should not be selling your software to them in the first place...
If your software is required to run on a VM, what about this concept:
On the host machine you create a compiled program that runs, e.g., every half hour, reads the hard disk and CPU IDs, and then stores them, together with the current timestamp and a salted hash of all that information, in a file.
You then require that the folder containing the file is shared with the VM.
In your compiled software within the VM you can then read this file and check that the timestamp is recent and the hash is valid (a rough sketch of this check follows below).
Or better yet, have the host program somehow communicate with the software in the VM directly.
Couldn't this be an okay solution? It's not as secure as using a hardware key (like a YubiKey), but you would have to be quite tech-savvy to break it...?
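Here is a hedged sketch of that shared-file check, using an HMAC as the "salted hash". The secret has to be embedded in both the host-side helper and the guest-side check, so this is only an obstacle rather than real protection; read_hardware_ids(), the file path and the timing constants are placeholders.

    import hashlib
    import hmac
    import json
    import time

    SECRET = b"replace-with-a-key-compiled-into-both-binaries"
    STAMP_FILE = "/shared/license_stamp.json"  # folder shared between host and VM
    MAX_AGE = 3600                             # accept stamps up to an hour old

    def write_stamp(read_hardware_ids):
        """Runs on the host, e.g. every half hour."""
        payload = {"hw": read_hardware_ids(), "ts": int(time.time())}
        body = json.dumps(payload, sort_keys=True).encode()
        payload["mac"] = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
        with open(STAMP_FILE, "w") as f:
            json.dump(payload, f)

    def check_stamp(expected_hw):
        """Runs inside the VM before unlocking the application."""
        with open(STAMP_FILE) as f:
            payload = json.load(f)
        mac = payload.pop("mac", "")
        body = json.dumps(payload, sort_keys=True).encode()
        expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(mac, expected):
            return False  # stamp was forged or tampered with
        if time.time() - payload["ts"] > MAX_AGE:
            return False  # stamp is stale (host-side helper not running)
        return payload["hw"] == expected_hw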
