How do I convert MSC number from HLR lookup to geolocation? - mobile

If I have the number of a phone's MSC (Mobile Switching Center) that I get from its HLR Lookup, how can I identify the location from that number? Do the cellular companies publish the locations or is there a service that does this?
Thanks!

If you mean the location of the MSC itself, then there is nothing in the number that will tell you the geographic location of the MSC (unless you have some operator lookup table, which they would be very unlikely to provide).
Additionally, with the move towards IP core networks and distributed architectures for MSCs, what was once an 'MSC' may now be distributed across several locations.
This is because in the past a single entity called an MSC would both terminate the links from the radio access networks and contain all the 'brains' and signalling to decide how to route your call. With more modern architectures, the 'brains' and the part that handles the actual switching of the speech part of the call are generally on different nodes, and more often than not in different locations.
If you meant to determine the location of the phone itself from the MSC number, then again that will depend on the network - modern distributed architectures mean the node indicated by the 'MSC' might handle calls from anywhere in a given region or even country. Even if it is an older network with 'monolithic' MSCs (rather than the newer distributed-architecture ones), a single MSC will still likely serve too large a region, relatively speaking, to be of much use to you.

Related

What is the difference between p2p file system and distributed file system?

When I googled for a distributed storage tool for my app,
I found two types of technologies:
the first present themselves as P2P file systems (IPFS, ...) and the others as distributed file systems (Ceph, ...),
so what is the difference between P2P systems and distributed systems?
What I believe (it may be wrong) is that P2P systems don't assume trust between nodes; in contrast, in distributed systems all nodes have to trust each other, or at least trust a "master" node.
P2P is a Distributed System Architecture.
What I believe (it may be wrong) is that P2P systems don't assume
trust between nodes; in contrast, in distributed systems all nodes have to
trust each other, or at least trust a "master" node.
It depends on your definition of trust. If 'trust' means standalone operation of a computer node, then you are correct.
P2P involves a component called a peer. In P2P, each peer has the same power/capability as every other peer in the network. One peer can work alone, without any other peer.
Another example of Distributed System Architecture is Client-Server Architecture.
A client has limited capability compared to a peer: it must connect to a server to perform a specific task, and can do little without one.
A distributed file system (DFS) combines the storage of several nodes (possibly a large number) in such a way that the end user sees it as a single storage space. A middleware layer manages the combined disk space and takes care of the data. Such a system can rely on servers or on simple workstations. If the nodes are workstations, we are talking about a P2P DFS; if they are servers, we just say distributed file system. I have to say that even a P2P file system could involve a node that acts as a server for indexing files, mapping locations, etc. A P2P DFS is affected by the churn behaviour of peers (joining/leaving), while server-based systems don't have this problem.
The best approach is to analyze several P2P distributed file systems like Freenet, CFS, OceanStore (interesting since it uses untrusted servers that act as peers), Farsite, etc.
And some DFSs like Ceph, Hadoop, Riak, etc.
Hope this helped.

"Standard" approach to collecting data from/distributing data to multiple devices/servers?

I'll start with the scenario I am most interested in:
We have multiple devices (2 - 10) which all need to know about
a growing set of data (thousands to hundreds of thousands of small chunks,
say 100 - 1000 bytes each).
Data can be generated on any device, and we
want every device to be able to get all the data (edit: ..eventually. Devices are not connected and/or online all the time, but they synchronize now and then). No data needs
to be deleted or modified.
There are of course a few naive approaches to handle this, but I think
they all have some major drawbacks. Naively sending everything I
have to everyone else will lead to poor performance with lots of old data
being sent again and again. Sending an inventory first and then letting
other devices request what they are missing won't do much good for small
data. So maybe having each device remember when and who they talked to
could be a worthwhile tradeoff? As long as the number of partners
is relatively small saving the date of our last sync does not use that much
space, but it should be easy to just send what has been added since then.
But that's all just conjecture.
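The "remember when and with whom you last synced" idea can be sketched as follows. This is a minimal in-memory model (the class and field names are illustrative, not from any real library): each device keeps, per peer, the timestamp of the last successful sync, and a two-way sync only ships chunks newer than that cutoff.

```python
import time

class Device:
    """Toy model: each device stores its chunks plus, per peer, the time
    of the last successful sync, so later syncs only ship newer chunks."""
    def __init__(self, name):
        self.name = name
        self.chunks = {}       # chunk_id -> (received_at, payload)
        self.last_sync = {}    # peer name -> timestamp of last sync

    def add(self, chunk_id, payload):
        self.chunks[chunk_id] = (time.monotonic(), payload)

    def sync_with(self, peer):
        """Two-way sync; returns the number of chunks transferred."""
        sent = 0
        cutoff_to_peer = peer.last_sync.get(self.name, 0.0)
        cutoff_to_us = self.last_sync.get(peer.name, 0.0)
        for cid, (t, data) in list(self.chunks.items()):
            if t > cutoff_to_peer and cid not in peer.chunks:
                # Record the *receive* time, so relayed chunks keep
                # propagating to peers synced later.
                peer.chunks[cid] = (time.monotonic(), data)
                sent += 1
        for cid, (t, data) in list(peer.chunks.items()):
            if t > cutoff_to_us and cid not in self.chunks:
                self.chunks[cid] = (time.monotonic(), data)
                sent += 1
        now = time.monotonic()
        self.last_sync[peer.name] = now
        peer.last_sync[self.name] = now
        return sent
```

The per-peer timestamp table grows linearly with the number of partners, which matches the intuition above that this is cheap for a handful of devices; a real implementation would also have to handle clock assumptions across machines (here the timestamps are purely local).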
This could be a very broad
topic and I am also interested in the problem as a whole: (Decentralized) version control probably does something similar
to what I want, as does a piece of
software syncing photos from a users smart phone, tablet and camera to an online
storage, and so on.
Somehow they're all different though, and there are many factors to keep in mind, like data size, bandwidth, consistency requirements, processing power, or how many devices have aggregated new data between syncs. So what is the theory about this?
Where do I have to look to find
papers and such about what works and what doesn't, or is each case just so much
different from all the others that there are no good all round solutions?
Clarification: I'm not looking for ready made software solutions/products. It's more like the question what search algorithm to use to find paths in a graph. Computer science books will probably tell you it depends on the features of the graph (directed? weighted? hypergraph? euclidian?) or whether you will eventually need every possible path or just a few. There are different algorithms for whatever you need. I also considered posting this question on https://cs.stackexchange.com/.
In your situation, I would investigate a messaging service that implements the AMQP standard, such as RabbitMQ or OpenAMQ. Each time a new chunk is emitted, it should be sent to the AMQP broker, which will broadcast it to all device queues. Then the messages may be pushed to the consumers or pulled from the queue.
You can also consider Kafka for streaming data from several producers to several consumers. Another possibility is ZeroMQ. It depends on your specific needs.
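Whichever broker you pick (RabbitMQ, Kafka, ZeroMQ), the underlying pattern is the same: a fanout exchange copies each published chunk into one queue per subscribed device, and each device drains its own queue when it comes online. A toy in-process sketch of just that pattern (no real broker; all names are illustrative):

```python
from collections import defaultdict, deque

class FanoutBroker:
    """Minimal stand-in for an AMQP fanout exchange: every published
    chunk is copied into one queue per subscribed device."""
    def __init__(self):
        self.queues = defaultdict(deque)

    def subscribe(self, device_id):
        self.queues[device_id]   # creates an empty queue for the device

    def publish(self, chunk):
        # Fanout: every subscriber gets its own copy. (A real setup would
        # usually skip the publisher's own queue.)
        for q in self.queues.values():
            q.append(chunk)

    def pull(self, device_id):
        """Drain everything waiting for this device since its last pull."""
        q = self.queues[device_id]
        items = list(q)
        q.clear()
        return items
```

The per-device queue is what gives you the "eventually, even if offline" property from the question: messages simply accumulate until the device pulls them.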
Have you considered using Amazon Simple Notification Service (SNS) to solve this problem?
You can create a topic for each group of devices you want to keep in sync. Whenever there is an update to the dataset, a device can publish to the topic, and SNS will in turn push the update to all devices.

Traffic profiling: distinguish between streaming and downloading and other services?

I'm a Libpcap and Wireshark novice: for my school project I have to distinguish between different types of traffic (SMTP, web traffic, VoIP, online gaming, downloading, streaming, ...).
While at first I relied on port numbers (25 for SMTP, 80/443 for HTTP/HTTPS, ...), some problems came up: more and more sites support HTTPS (so, no more payload inspection), and the port number alone can't tell me important differences (port 443 may carry different types of services).
So I thought of classifying traffic according to some known behaviours; for example, download and streaming have different bandwidth (bitrate) profiles: the first has constant high bandwidth, the second has spikes of high bandwidth that drop back to zero once you have the "piece" you need.
Because of my unfamiliarity with the topic, this is the only known behaviour I have found on the Web.
Can anyone point me in the right direction?
1. Use Wireshark to partition your traffic into sessions.
2. For those where categorization is clear based on protocol/port, categorize (e.g. port 25 = SMTP should be a given).
3. For those that need further analysis, find appropriate features, such as:
   - average packet size,
   - packet size standard deviation/variance,
   - packets per second in the upstream/downstream direction,
   - overall amount of data,
   - up/downstream data amount ratio,
   - up/downstream packet number ratio,
   - and much more you could think of.
4. With the numerical values for the features from 3., build vectors and apply all your classification knowledge: maybe this is a case for support vector machines? Maybe you just look at the clusters you might see and draw conclusions? Maybe you generate "known" traffic of all relevant kinds, map it into that vector space, and categorize each unknown session as the Euclidean-distance-closest known traffic type? Maybe you precondition your vectors with what you learn from a principal component analysis?
As you can see in 4., there are a lot of tools for classification, and you will need some proficiency in classification theory to deal with your problem.
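Steps 3 and 4 can be sketched in a few lines: turn each session (here just its list of packet sizes) into a numeric feature vector, then label an unknown session by Euclidean distance to the centroid of each known traffic type. The feature set and example data are illustrative only; in practice you would use more features and normalize them to comparable scales first.

```python
import math
from statistics import mean, pstdev

def features(packet_sizes):
    """Per-session feature vector: mean size, size std-dev, total bytes."""
    return (mean(packet_sizes), pstdev(packet_sizes), sum(packet_sizes))

def classify(session, labelled):
    """Nearest-centroid classifier: pick the known traffic type whose
    feature centroid is Euclidean-closest to this session's vector.
    `labelled` maps a label to a list of known sessions of that type."""
    v = features(session)
    centroids = {
        label: tuple(mean(col) for col in zip(*(features(s) for s in sessions)))
        for label, sessions in labelled.items()
    }
    return min(centroids, key=lambda label: math.dist(v, centroids[label]))
```

Swapping the nearest-centroid step for an SVM or a PCA-preconditioned clustering, as suggested in step 4, only changes the last function; the feature extraction stays the same.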

Protecting crypto keys in RAM?

Is there any way to protect encryption keys that are being stored in RAM from a freezer attack? (Sticking the computer in a freezer before rebooting into malicious code that accesses the contents of RAM.)
This seems to be a legitimate issue with security in my application.
EDIT: it's also worth mentioning that I will probably be making a proof of concept OS to do this on the bare metal, so keep in mind that the fewer dependencies, the better. However, TRESOR does sound really interesting, and I might port the source code of that to my proof of concept OS if it looks manageable, but I'm open to other solutions (even ones with heavy dependencies).
You could use something like the TRESOR Linux kernel patch to keep the key only in ring-0 (highest privilege level) CPU debug registers. Combined with an Intel CPU that supports the AES-NI instruction set, this need not result in a performance penalty (despite the need for key recalculation) compared to a generic encryption implementation.
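TRESOR-style register-only key storage needs kernel support, but even ordinary application code can shrink the attack window by keeping keys in mutable buffers and zeroizing them the moment they are no longer needed. A minimal Python sketch of that habit (this does not defeat a cold-boot attack, it only shortens the key's lifetime in RAM, and Python may still have made transient copies elsewhere):

```python
import secrets

# Hold the key in a mutable bytearray: unlike an immutable bytes object,
# it can be overwritten in place instead of lingering until the garbage
# collector reclaims it.
key = bytearray(secrets.token_bytes(32))

# ... use `key` for encryption/decryption here ...

# Zeroize in place as soon as the key is no longer needed.
for i in range(len(key)):
    key[i] = 0

assert not any(key)   # every byte is now zero
```

The same pattern in C would use `explicit_bzero` (or an equivalent that the compiler cannot optimize away), for the same reason: minimize how long key material sits in readable memory.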
There is no programmatic way. You cannot stop an attacker from freezing your computer and removing the RAM chips for analysis.
If someone gains access to your hardware - everything you have on it is in the hands of the attacker.
Always keep in mind:
http://cdn.howtogeek.com/wp-content/uploads/2013/03/xkcd-security.png
As Sergey points out, you cannot stop someone from attacking the RAM if the hardware is in their possession. The only possible solution to defend hardware is with a tamper resistant hardware security module. There are a couple of varieties on the market: TPM chips and Smart Cards come to mind. Smart cards may work better for you because the user should remove them from the device when they walk away, and you can simply erase the keys when the card is removed.
I would do a bit more risk analysis that would help you figure out how likely a frozen RAM attack is. Which computers are most at risk of being stolen? Laptops, servers, tablets, or smart phones? What value can your attackers possibly get from a stolen computer? Are you looking to keep them from decrypting an encrypted disk image? From recovering a document that's currently loaded in RAM? From recovering a key that would lead to decrypting an entire disk? From recovering credentials that would provide insider access to your network?
If the risks are really that high but you have a business need for remote access, consider keeping the secrets only on the secured corporate servers, and allowing only browser access to them. Use two factor authentication, such as a hardware access token. Perhaps you then require the remote machines to be booted only from read-only media and read-only bookmark lists to help ensure against viruses and other browser based attacks.
If you can put a monetary value on the risk, you should be able to justify the additional infrastructure needed to defend the data.

How to protect application against duplication of a virtual machine

We are using standard items such as Hard Disk and CPU ID to lock our software licenses to physical hardware. How can we reduce the risk of customers installing onto a virtual machine and then cloning the virtual machine, bypassing our licensing?
One approach is to have a licensing server. When you enter a license code into the client (on a VM), it contacts the server and sends its license code and other information. It then contacts the server repeatedly (you define the interval; maybe once every few hours), asking 'Am I still valid?'. Along with this request, it sends a unique ID. The server replies 'Yes, you are valid' and sends a new unique ID back to the client. The client sends this new unique ID back with its next request, and the server verifies it is the same ID it sent to the client for that license on the previous request.
If the VM is duplicated, the next time one copy asks the server 'Am I valid?', the unique ID will be incorrect, either for it or for the other VM. Both will not continue to work.
You will need to determine what to do if the server goes down, or the network goes down, such that the client cannot communicate with the server. Do you immediately disable your software? Bad idea! Don't make your customers angry. You'll want to give them a grace period. How long should this be? A few days? Weeks?
Let's say you give them a 1-month grace period. In theory, they could clone the parent VM just after entering the license key, then restore the other VMs to this clone just before their grace period runs out, disabling network access to them. This would be a hassle for your customers though, just to have pirated additional copies of your software. You have to determine what kind of grace period won't hassle your legitimate customers, while hopefully giving you the protection you seek.
Additional protection could be achieved by verifying that the VM's clock is set correctly. This would prevent the above approach to pirating.
Another consideration is that a savvy user could write their own licensing server to communicate with the VM instances, and tell them all 'you're good' -- so encrypting the communication could help deter this. How far you want to go here really depends on how much you think pirating really might be an issue with your customers. In the end you won't be able to stop true pirates who have time on their hands, but you can keep honest users honest.
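The rolling-token scheme described above fits in a few lines. This is an in-process sketch with no networking or persistence (class and method names are illustrative): the server remembers only the last token it issued per license, so a cloned VM that replays a stale token gets rejected on its next check-in.

```python
import secrets

class LicenseServer:
    """Sketch of the rolling-token check: each successful check-in
    invalidates the previous token, so two VMs sharing one license
    cannot both keep passing."""
    def __init__(self):
        self.current = {}   # license_key -> last token issued

    def activate(self, license_key):
        token = secrets.token_hex(16)
        self.current[license_key] = token
        return token

    def check(self, license_key, token):
        """Return a fresh token if `token` is current, else None."""
        if self.current.get(license_key) != token:
            return None      # stale or unknown token: clone detected
        fresh = secrets.token_hex(16)
        self.current[license_key] = fresh
        return fresh
```

Note that, exactly as described above, the scheme cannot tell which copy is the clone: whichever VM checks in second loses. A real deployment would also need the grace-period and transport-encryption considerations discussed here.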
License. Tell your users, they may not run unlicensed copies.
We are actually failing to buy a license for a piece of software at the moment, because the vendor is scared of virtual machines: the infrastructure for our department is being moved to a centralized virtualized solution, and we have to fight the vendor to be allowed to buy a license for his software!
Don't be afraid of paying users.
People too cheap to buy licenses are going to look for another solution and would be too much hassle anyway.
(good luck telling your boss that, though...)
There is no good reason to lock to a physical machine. Last I checked, computers can break down, and then the user is inconvenienced not only by a dead computer but by having to call you to get the software locked to a new machine. If you must do draconian license management, use a (local) management server and have running copies verify their license every few minutes. Just realize that, whatever you do, if someone really wants to use your software without paying, they will find a way.
You need something outside the computer "hardware" to authenticate against. Most companies choose hardware keys (dongles) for high-cost software whose users will put up with them.
Other companies use online methods: if more than one user with the same CPUID and other hardware identifiers is concurrently using a given license, disallow another instantiation, or close the existing one.
You have to choose protection according to your needs and the consumer's willingness to jump through your anti-piracy hoops.
-Adam
There's not a lot you can do AFAIK, except require periodic online activation.
We have problems with people Norton-ghosting physical machines. Apparently HDD serial numbers are ghosted too.
If your software runs under a VM, then it will run under any number of cloned VMs. Therefore, the only option seems to be preventing it from running under a VM at all. Here's an article about virtual machine detection: Detect if your program is running inside a Virtual Machine, and one about thwarting it.
By the way, cloning a VM is usually enough of a hassle to deter casual users from bypassing your licensing and those hell bent on cracking will probably find a way to bypass it anyway.
"Don't bother" is the short version. It's non trivial enough for your clients to do it that if they are doing that, then either they won't pay for what they use no matter what (they will not use it unless they can get it for free) or you are just flat charging to much (as in you are gouging.)
The "real" customer will generally pay for the stuff. From what I've seen, places like businesses will generally consider it not worth the effort.
I know some virtual machine software (at least VMware) has features that allow software to detect virtualization. But there is no foolproof way; it's possible to patch such features away. Mysteriously changing performance (due to CPU spikes in the host) could also be used, but its reliability is questionable. There is a plethora of "signs of being virtualized", but they tend not to be 100% reliable.
It is a problem, and any savvy user will be able to defeat pretty much anything you do about it. Unsavvy users might get caught by behaviors like VMware Player changing the MAC and other IDs of the virtual machine when you move it, presumably in a nod to this kind of issue.
The best solution is likely to use a license server instead, since that server will count the number of active licenses. Node locking is easier to defeat, and using a server tends also to push responsibility onto an IT department that is more sensitive to not breaking license agreements compared to individual users who just want to get their job done as quickly as possible.
But in the end, I agree that it all falls back to proper license language and having customers you trust somewhat. If you think that people are making a fool of you in this way, you should not be selling your software to them in the first place...
If your software is required to run on a VM, what about this concept:
On the host machine you create a compiled program that runs, e.g., every half hour, which reads the hard disk and CPU ID and then stores them in a file together with the current timestamp and a salted hash of all that information.
You then require that the folder containing the file is shared with the VM.
In your compiled software within the VM, you can then read this file and check that the timestamp is recent and the hash is valid.
Or better yet, have the host program somehow communicate with the software in the VM directly.
Couldn't this be an okay solution? Not as secure as using a hardware key (like a YubiKey), but you would have to be quite tech-savvy to break it...?
