Creating a local CDMA or GSM network?

I am developing a number of different mobile applications for a number of different mobile devices, and I want to test quickly in a local development environment. I was wondering if it is possible (with some sort of hardware) to set up a local desktop CDMA/GSM base station for testing devices over a personal cellular network. The range does not have to be far. The alternative is purchasing SIM cards and plans from various carriers, but not all carriers/network types are available in our area.
I'm sure I've seen some sort of desktop device that lets you set up local networks for development/testing purposes, but I can't seem to find it.
Thanks.

About the least expensive such device I know of is the Agilent 8960. It's difficult to give an accurate price, since it depends on the options you choose, but you are likely to need to drop about $30,000 or more for a new one.
GSM, GPRS, EDGE, WCDMA, HSPA, CDMA 2000, 1x, EV-DO are all supported, although a box with all of those options in it will be well over the figure I quoted above!
The device has been around for a while though, so you may find something on eBay or via surplus sales and the like.
The upside is that it gives you an enormous amount of control over the cellular environment, and will let you do repeatable throughput tests (something that is really impossible on 'live' networks unless you use statistical techniques and many, many test runs) but the obvious downside is the price!

In a standard network setup you would need an antenna, a BTS, a BSC, an MSC, and a GGSN plus an SGSN to carry data traffic: all horribly expensive, and all requiring expert knowledge to get up and running.
If you are interested in experiments, then try OpenBSC, although it might be difficult to find BTS hardware.
If you want to buy actual products, then have a look at ip.access; they offer picocell hardware. I am not sure, though, whether their BSC can work without an MSC and SGSN. Either way, expect a five-digit price. Tecore also might be worth a visit.
Test and measurement equipment manufacturers might be an alternative as well. There you should check whether you can actually break the data traffic out to the internet or to a test server, if you need that.
If you want to do this for a living and not for fun, I would assume that simply buying SIMs plus data plans is the cheapest alternative.

You can roll your own cell network with OpenBTS (http://openbts.org/), but you still need a hardware development kit, which is a little expensive. Alternatively, you can try to hack your own phone to use it as a radio, which is really difficult but cheap.
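To give a flavour of what setting up OpenBTS involves, here is a sketch of configuring a test network identity through its CLI. The key names are taken from common OpenBTS releases but vary between versions, and the values (test MCC/MNC, ARFCN) are arbitrary; the # lines are annotations, not CLI syntax.

```
# identify the network (001/01 is the reserved test MCC/MNC)
OpenBTS> config GSM.Identity.MCC 001
OpenBTS> config GSM.Identity.MNC 01
OpenBTS> config GSM.Identity.ShortName TestNet
# pick the radio channel (ARFCN) to transmit on
OpenBTS> config GSM.Radio.C0 51
# accept any handset that tries to register (lab use only)
OpenBTS> config Control.LUR.OpenRegistration .*
```

With open registration enabled, any nearby handset searching for a network can camp on the cell, which is exactly what you want on a desk and exactly what you must not do outside a shielded lab.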

The answer to this question may be of interest to you also, depending on what your application does:
Testing Mobile Sites
Essentially, there are companies that offer a sort of virtual testing service, allowing you to test phones with different location and operator combinations.

Related

Cross Platform, Open Source, Data Synchronization Strategy

The goal is to have occasionally connected clients, belonging to one user and running on an array of different platforms, share a common dataset. The server will be GNU/Linux. Platforms and devices initially targeted include GNU/Linux, Android and Windows on x86_64, i686, ARMv7, ARMv8, with possible consideration given to OS X and iOS devices. The UI toolkit will likely be Qt5. The non-interface logic will be in C++. My code will be licensed under the GPL or AGPL.
Many applications like Google's Maps, for instance, require a reliable connection to work effectively. My design decision is that each client should be fully functional without a network connection, assuming that the user has not used a different device/platform to change the data. In such cases, only a synchronization with network access should be necessary.
Some applications are written with hand-rolled data persistence and synchronization layers customized for each platform or subset of platforms. While this is quite effective, it is beyond the skill and manpower available (just me :) ).
Also, though I'm somewhat conditioned by past experience to think of things from a relational database perspective, the data I'm expecting is probably better suited to a JSON/NoSQL database. Though there is UNQLite, it has no more synchronization capabilities than SQLite. And things like MongoDB, CouchDB, etc. are, as far as I have found, solely targeting the data center or cloud at the moment. Really, the same seems to be true of SQL solutions. Are there truly cross platform databases that have native synchronization/replication capabilities?
I've spent the past weeks looking at a variety of options, but have found little that is truly useful. The closest I've seen to something useful is Couchbase; however, it seems to be rather unstable (from an API standpoint, not in terms of crashes), has a huge set of dependencies, appears to be nothing but a SQLite wrapper on mobile platforms, and has limited developer docs at the moment.
If there is an applicable library or toolkit, please point it out. If not, or if this is excessively "discussion oriented", please kindly point me to an appropriate resource to find help before closing this question.
Notes:
How would you design & implement a Cross Platform synchronization mechanism? Talks about hand rolling solutions. If this is the only way to go, I would appreciate references specifically applicable to SQLite3 since I know that it is stable and available on just about everything but the kitchen sink.
Cross platform data synchronization Does not really address anything significant. One of the main concerns I have is effectively synchronizing data while minimizing network bandwidth requirements. Even in the US, there are vast areas where high speed data is simply not available. To me, usability in the widest range of circumstances is important.
How to Synchronize Android Database with an online SQL Server? This post is somewhat helpful but I am a total, absolute novice at developing for anything but *NIX on servers and workstations. Again, if it is necessary to do SQL based persistence for this set of requirements, more pointers to guidance would be helpful.
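Since SQLite3 keeps coming up as the lowest common denominator, here is a minimal sketch of the hand-rolled approach: each row carries an updated_at timestamp, each client remembers when it last synced, and only rows changed since then cross the network (which also addresses the bandwidth concern). The notes table and last-writer-wins policy are illustrative assumptions; a real design also needs tombstones for deletes and a deliberate conflict strategy. The upsert syntax requires SQLite 3.24 or later.

```python
import sqlite3

def pull_changes(conn: sqlite3.Connection, last_sync: float):
    """Collect rows changed since this client's last sync."""
    cur = conn.execute(
        "SELECT id, payload, updated_at FROM notes WHERE updated_at > ?",
        (last_sync,),
    )
    return cur.fetchall()

def apply_changes(conn: sqlite3.Connection, rows):
    """Merge incoming rows, keeping whichever copy is newer (last writer wins)."""
    for row_id, payload, updated_at in rows:
        conn.execute(
            """INSERT INTO notes (id, payload, updated_at)
               VALUES (?, ?, ?)
               ON CONFLICT(id) DO UPDATE SET
                 payload = excluded.payload,
                 updated_at = excluded.updated_at
               WHERE excluded.updated_at > notes.updated_at""",
            (row_id, payload, updated_at),
        )
    conn.commit()
```

A sync is then just an exchange: each side calls pull_changes with the other's last-sync time and feeds the result to the other's apply_changes, after which both sides record the new sync time.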

How can I evaluate and compare the effectiveness of remote desktop protocols based on the quality of their user experience?

There are many remote desktop protocols in wide use, e.g. VNC, RDP, PCoIP, RGS, etc. Looking at their specifications, they seem to provide different features, such as redirecting I/O, tuning display settings, etc. I've recently installed a thin-client system at my company. With RDP, users are complaining about slow screen updates. PCoIP has also slowed down their IP telephony.
Now the Q is: How can I evaluate and compare the effectiveness of remote desktop protocols based on the quality of their user experience?
I'd try to steer away from people's "gut feel" for performance and get numbers to do the talking.
I use a variety of benchmarking tools to evaluate performance. Since it's about user experience, you need to consider the type of user: task worker, knowledge worker, power user, etc.
I can't say which protocol is "best", as that's really a moving target with many factors, but our baselines are in a XenDesktop environment with ESX and the like, with images on SAS storage and EFDs for cache drives.
The tools I use are:
OSMark
It's a great tool: I customise the tests to suit the environment, e.g. CPU-intensive, graphics and so on. I can then compare the rendering of rich text, web and 3D objects when I make changes to the environment, and relate the results back to the baseline. You can also run the same tests on a physical machine to look at the variance.
Crystal Mark
Benchmarks disk performance; good for "internal to VDI" testing of a VDI's disk and network performance.
If you have a Citrix Environment, EdgeSight for Load Testing is great as well.
While this does not address your problem in particular, it might help determine your bottlenecks and create load on the system for other testing.
You may need to look into QoS for your telephony stream and separate that traffic once you determine what is most important. Look at whether you can change the compression in your telephony system as well.
Hope this helps, or is useful to anyone passing through.
jezr
Here are my findings:
RAWC by VMware
LoginVSI by Login Consultants
and DeskBench
I am looking for some tips also, but came up pretty much empty-handed.
The best approach I've found for benchmarking RDP-like solutions is Wireshark plus a synthetic test.
Run Wireshark, connect, and do some (ideally scripted, for reproducibility and comparability) operations your users would do: go to a menu, edit a setting, trigger a fullscreen refresh of some picture that is hard to compress, then maximize and minimize a window to see how well big monotone areas are compressed, and so on.
Measure the time between the click that starts the refresh and the complete refresh showing up (when the data stops flowing).
Watch out for things like clocks, widgets showing real-time data, and other elements generating refreshes every second; they produce noise that makes the results hard to evaluate.
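To make the "click until the data stops flowing" measurement repeatable, you can post-process the capture instead of eyeballing it in Wireshark. Here is a rough sketch in Python using scapy; the capture filename and the idle-gap threshold are assumptions you would tune per protocol.

```python
from scapy.all import rdpcap  # pip install scapy

CAPTURE = "session.pcap"  # hypothetical capture of one scripted test run
IDLE_GAP = 0.5            # seconds of silence we treat as "refresh finished"

packets = rdpcap(CAPTURE)
bursts = []               # (start, end, bytes) per detected screen refresh
start = prev = float(packets[0].time)
total = len(packets[0])

for pkt in packets[1:]:
    t = float(pkt.time)
    if t - prev > IDLE_GAP:            # long silence: the previous burst is done
        bursts.append((start, prev, total))
        start, total = t, 0
    total += len(pkt)
    prev = t
bursts.append((start, prev, total))

for i, (s, e, b) in enumerate(bursts, 1):
    print(f"burst {i}: {e - s:.3f}s, {b} bytes")
```

Each burst's duration approximates one refresh's "click to settled" time, and the byte count lets you compare how well different protocols compress the same scripted workload.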

Multi-Agent system application idea

I need to implement a multi-agent system for an assignment. I have been brainstorming ideas about what to implement, but have not come up with anything great yet. I do not want it to be a traffic simulation application, but I need something just as useful.
I once saw an application of multiagent systems for studying/simulating fire evacuation plans in large buildings. Imagine a large building with thousands of people; in case of fire, you want these people to follow some rules for evacuating the building. To evaluate the effectiveness of your evacuation plan rules, you may want to simulate various scenarios using a multiagent system. I think it's a useful and interesting application. If you search the Web, you will find papers and works in this area, from which you might get further inspiration.
A few come to mind:
Exploration and mapping: send a team of agents out into an environment to explore, then assimilate all of their observations into consistent maps (not an easy task!)
Elevator scheduling: how to service call requests during peak capacities considering the number and location of pending requests, car locations, and their capacities (not too far removed from traffic-light scheduling, though)
Air traffic control: consider landing priorities (e.g. fuel, number of passengers, emergency conditions, etc.), airplane position and speed, and landing conditions (e.g. number of runways, etc.). Then develop a set of rules so that each "agent" (i.e. airplane) assumes its place in a landing sequence. Note that this is a harder version of the flocking problem mentioned in another reply.
Not sure what you mean by "useful", but... you can always have a look at swarm-based AI (schools of fish, flocks of birds, etc.). Each agent (boid) is very, very simple in this case. Make the individual agents follow each other, stay away from a predator, and so on; a minimal sketch follows.
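Here is a minimal 2-D sketch of that boid idea in Python, keeping only two of the classic three rules (cohesion and separation; alignment is omitted for brevity). All constants are arbitrary.

```python
import random

NEIGHBOUR_RADIUS = 5.0  # how far a boid can "see" (arbitrary)
STEP = 0.1              # integration step

class Boid:
    def __init__(self):
        self.x, self.y = random.uniform(0, 50), random.uniform(0, 50)
        self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)

def step(boids):
    for b in boids:
        near = [o for o in boids
                if o is not b and abs(o.x - b.x) + abs(o.y - b.y) < NEIGHBOUR_RADIUS]
        if near:
            # cohesion: steer toward the neighbours' centre of mass
            cx = sum(o.x for o in near) / len(near)
            cy = sum(o.y for o in near) / len(near)
            b.vx += (cx - b.x) * 0.01
            b.vy += (cy - b.y) * 0.01
            # separation: back away from the single closest neighbour
            n = min(near, key=lambda o: (o.x - b.x) ** 2 + (o.y - b.y) ** 2)
            b.vx -= (n.x - b.x) * 0.02
            b.vy -= (n.y - b.y) * 0.02
        b.x += b.vx * STEP
        b.y += b.vy * STEP

flock = [Boid() for _ in range(30)]
for _ in range(100):
    step(flock)
print(flock[0].x, flock[0].y)
```

Adding a predator is just one more repulsion term, which is what makes this a nice assignment: each behaviour is a small local rule, yet the flock-level motion looks coordinated.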
It's not quite multi-agent, but have you considered a variation on ant colony optimisation? A small sketch follows the link below.
http://en.wikipedia.org/wiki/Ant_colony_optimization_algorithms
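For concreteness, here is a tiny ant colony optimisation sketch for a four-city travelling salesman instance; the distance matrix, ant count, and evaporation rate are all arbitrary.

```python
import random

# Tiny symmetric distance matrix for four hypothetical cities.
DIST = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
N = len(DIST)
pher = [[1.0] * N for _ in range(N)]  # pheromone level on each edge

def tour_length(tour):
    return sum(DIST[tour[i]][tour[(i + 1) % N]] for i in range(N))

def build_tour():
    """One ant walks the graph, preferring short, pheromone-rich edges."""
    tour = [0]
    while len(tour) < N:
        cur = tour[-1]
        choices = [c for c in range(N) if c not in tour]
        weights = [pher[cur][c] / DIST[cur][c] for c in choices]
        tour.append(random.choices(choices, weights)[0])
    return tour

best = list(range(N))
for _ in range(100):
    tours = [build_tour() for _ in range(10)]  # 10 ants per iteration
    for row in pher:                           # evaporate old pheromone
        for j in range(N):
            row[j] *= 0.9
    for t in tours:                            # shorter tours deposit more
        for i in range(N):
            a, b = t[i], t[(i + 1) % N]
            pher[a][b] += 1.0 / tour_length(t)
            pher[b][a] = pher[a][b]
    best = min(tours + [best], key=tour_length)

print(best, tour_length(best))
```

It is multi-agent in spirit: each ant acts independently on local information (pheromone), and coordination emerges only through the shared environment.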

Common web problems where Neural Networks could help

I was wondering if you creative minds out there could think of some situations or applications in the web environment where neural networks would be suitable, or would put an interesting spin on things.
Edit: Some great ideas here. I was thinking more web centric. Maybe bot detectors or AI in games.
To name a few:
Any type of recommendation system (whether it's movies, books, or targeted advertisement)
Systems where you want to adapt behaviour to user preferences (spam detection, for example)
Recognition tasks (intrusion detection)
Computer Vision oriented tasks (image classification for search engines and indexers, specific objects detection)
Natural Language Processing tasks (document/article classification, again search engines and the like)
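To give a concrete taste of the spam-detection item above, here is a minimal sketch: a single artificial neuron (logistic regression, the degenerate case of a network) trained on made-up bag-of-words features. A real system would use far richer features and a proper library.

```python
import math
import random

# Hypothetical features: [contains "free", contains "meeting", exclamation count]
DATA = [([1, 0, 3], 1), ([0, 1, 0], 0), ([1, 0, 1], 1), ([0, 1, 1], 0)]
w = [0.0, 0.0, 0.0]
bias = 0.0
LR = 0.5  # learning rate

def predict(x):
    z = bias + sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

for _ in range(1000):  # online gradient descent over the toy data
    x, label = random.choice(DATA)
    err = predict(x) - label
    for i in range(len(w)):
        w[i] -= LR * err * x[i]
    bias -= LR * err

print(predict([1, 0, 2]))  # a spammy-looking input should score near 1
```

The adaptation-to-user-preferences point from the list is exactly this training loop: every message the user marks as spam or not-spam becomes one more (x, label) pair.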
The game located at 20q.net is one of my favorite web-based neural networks. You could adapt this idea to create a learning system that knows how to play a simple game and slowly learns how to beat humans at it. As it plays human opponents, it records data on game situations, the actions taken, and whether or not the NN won the game. Every time it plays, win or lose, it gets a little better. (Note: don't try this with too simple of a game like checkers, an overly simple game can have every possible game/combination of moves pre-computed which defeats the purpose of using the NN).
Any sort of classification system based on multiple criteria might be worth looking at. I have heard of some company developing a NN that looks at employee records and determines which ones are the least satisfied or the most likely to quit.
Neural networks are also good at certain types of language processing, including OCR and converting text to speech. Try creating a system that can decipher CAPTCHAs, either from the graphical representation or the audio representation.
If you screen-scrape or accept other sites' item-sales info for price comparison, a NN can be used to flag probable errors in an item description for a human to then eyeball.
Often, as one example, computer hardware descriptions misstate the capacity, speed, or features being portrayed. Your NN will learn that a video card should generally not contain a "RAID 10" string. If a trend develops of adding RAID to GPUs, your NN will learn this over time as the eyeballer accepts such adverts, teaching the NN that this is now a legitimate class of hardware.
This hardware example can be extended to other industries.
Web advertising based on consumer choice prediction
Forecasting a user's web-browsing direction at micro scale and in the very short term (the current session). This idea is quite similar to the first one, a generalisation of it. A user browsing the web could be shown suggestions of other potentially interesting websites. The suggestions could be relevance-ranked according to a prediction calculated in real time during the user's activity. For instance, a list of proposed links, categories, or tags could be displayed as a cloud in which font size indicates rank score. Every click the user makes is an input to the forecasting system, so the forecast is constantly refined to give the user suggestions that match their interests as accurately as possible.
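The cloud rendering at the end is the easy part; as a trivial illustration, here is a sketch mapping hypothetical relevance scores to font sizes by linear interpolation:

```python
MIN_PT, MAX_PT = 10, 32  # font-size range for the cloud (arbitrary)

def cloud_sizes(scores):
    """Map relevance scores (tag -> float) to font sizes in points."""
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0  # avoid division by zero when all scores are equal
    return {tag: round(MIN_PT + (s - lo) / span * (MAX_PT - MIN_PT))
            for tag, s in scores.items()}

print(cloud_sizes({"python": 0.9, "gsm": 0.2, "sync": 0.5}))
```

The interesting part, of course, is the forecasting model that produces those scores and is re-run on every click.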
This ignores the "common web problems" part of the request in favour of the "interesting spin" view.
One of the many ways a NN can be viewed/configured is as a giant self-adjusting, multi-input, multi-output kind of case-flow control.
So when you want to offer match-ups that are fuzzy (not to be confused with fuzzy logic per se, which is another area of maths/computing), a NN may offer a usable alternative.
Say, to save energy, you offer a lift-club site for one-off or regular trips. People enter where they are, where they want to go, and at what time; you sort by city and display in a browse control.
Using a NN you could, over time, match transport owners to transport seekers by watching which owners and seekers link up, since an owner may not live in the same suburb a seeker resides in. The NN learns over time what variance in the physical distance between owners and seekers appears to be acceptable, so it can expand its search area when offering a seeker potential owners.
An idea.
Search! Recognize! Classify! Basically everything search engines do nowadays could benefit from a dose of neural networks and fuzzy logic. This applies in particular to multimedia content (e.g. content-indexing images and videos) since that's where current search technologies are lagging behind.
One thing that always amazes me is that we still don't have any pseudo-intelligent firewalling technology. Something that says "hey, this range of URLs is making too many requests when it's not supposed to", blocks them, and sends a report to an administrator. That could be done with a neural network.
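A crude non-neural baseline for that idea helps show what the network would have to beat. The sketch below tracks an exponential moving average of each source's request rate per time window and flags sources that spike far above their own baseline; both constants are arbitrary, and a NN version could consume these same features.

```python
from collections import defaultdict

ALPHA = 0.1      # EMA smoothing factor (assumption)
THRESHOLD = 5.0  # flag a source exceeding 5x its usual rate (assumption)

ema = defaultdict(lambda: 1.0)  # learned "normal" requests-per-window per source

def observe(source: str, requests_this_window: int) -> bool:
    """Update the per-source baseline; return True if the source looks abusive."""
    abusive = requests_this_window > THRESHOLD * ema[source]
    if not abusive:  # only well-behaved traffic adjusts the learned baseline
        ema[source] = (1 - ALPHA) * ema[source] + ALPHA * requests_this_window
    return abusive

for window in [3, 4, 2, 40]:  # simulated request counts; a burst at the end
    if observe("10.0.0.0/24", window):
        print("would block and report to the administrator")
```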
On the nasty side of things, some virus makers could find lucrative uses for neural networks. Adaptive trojans that "recognize" credit card numbers on a hard drive (instead of looking for certain cookies), or that "learn" how to mask themselves from detectors automatically.
I've been having fun trying to implement a bot based on a neural net for the Diplomacy board game, interacting via DAIDE protocols. It turns out to be extremely tricky, so I've turned to XCS to simplify the problem.
Suppose eBay used neural nets to predict how likely a particular item was to sell; to predict the best day to list items of that type; to suggest a starting price or "Buy It Now" price; or to grade your description based on how likely it was to attract buyers. All of those could be useful features, if they worked well enough.
Neural net applications are great for representing discrete choices and the whole behavior of how an individual acts (or how groups of individuals act) when mucking around on the web.
Take news reading for instance:
Back in the olden days, you usually picked up one newspaper (a choice), picked a section (a choice), scanned a page and chose an article (a choice), and read the basics or the entire article (another choice).
Now you choose which news site to visit and continue as above, but now you can drop one paper, pick up another, click on ads, change sections, and keep going with few limits.
The whole use of the web, and the choices people make based on their demographics, interests, experience, politics, time of day, location, etc., is a very rich area for NN application. This is especially relevant to news organizations, web page design and ad revenue, and may even be an underexplored area.
Of course, it's very hard to predict what one person will do, but put 10,000 of them together that are the same age, income, gender, time of day, etc. and you might be able to predict behavior that will lead to better designs. Imagine a newspaper (or even a game) that could be scaled to people's needs based on demographics. An ad man's dream!
How about connecting users to the closest DNS server, and making sure there are as few hops as possible between the request and the destination?
Friend recommendation in social apps (LinkedIn, Facebook, etc.)

Which resolution to target for a Mobile App?

When designing UI for mobile apps in general, which resolution could be considered safe as a rule of thumb? My interest lies specifically in web-based apps. The iPhone has a pretty high resolution for a handheld, and the Nokia E series seems to be oriented differently. Is 240×320 still considered safe?
Not enough information...
You say you're targeting a "Mobile App" but the reality is that mobile could mean anything from a cell phone with 128x128 resolution to a MID with 800x600 resolution.
There is no "safe" resolution for such a wide range, and if you're truly targeting all of them you need to design a custom interface for each major resolution. Add some scaling factors in and you might be able to cut it down to 5-8 different interface designs.
Further, "UI" means "user interface", and it includes a lot more than just the resolution: you can't count on a touchscreen, a full keyboard, or even soft keys.
You need to either better define your target, or explain your target here so we can better help you.
Keep in mind that there are millions of phone users that don't have PDA resolutions, and you can really only count on 128x128 or better to cover the majority of technically inclined cell phone users (those that know there's a web browser in their phone, never mind those that use it).
But if you're prepared to accept these losses, go ahead and hit for 320x240 and 240x320. That will give you most current PDA phones and up (older blackberries and palm devices had smaller square orientations). Plan on spending time later supporting lower resolution devices and above all...
Do not tie your app to a particular resolution.
Make sure your app is flexible enough that you can deploy new UIs without changing internal application logic; in other words, separate the presentation from the core logic. You will find this very useful later: the mobile world changes daily. Once you gauge how your app is being used you can, for instance, easily deploy an iPhone-specific version that is pixel perfect (and prettier than an upscaled 320x240) in order to engage more users. Being able to do this in a few hours (because you don't have to change the internals) is going to put you miles ahead of the competition if someone else makes a swipe at your market. A minimal sketch of that separation follows.
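Here is that separation sketched in Python (all names are illustrative): the core logic knows nothing about screens, and each resolution gets its own thin renderer, so shipping a device-specific UI means adding one class rather than touching the internals.

```python
class OrderCore:
    """Core logic: no knowledge of screens or widgets."""
    def __init__(self):
        self.items = []

    def add(self, name: str, price: float):
        self.items.append((name, price))

    def total(self) -> float:
        return sum(price for _, price in self.items)

class SmallScreenView:
    """Renderer for ~240x320 devices: terse, single column."""
    def render(self, core: OrderCore) -> str:
        lines = [name for name, _ in core.items]
        return "\n".join(lines) + f"\nTOT {core.total():.2f}"

class LargeScreenView:
    """Renderer for higher resolutions: roomier two-column layout."""
    def render(self, core: OrderCore) -> str:
        rows = [f"{name:<20} {price:>8.2f}" for name, price in core.items]
        return "\n".join(rows) + f"\n{'Total':<20} {core.total():>8.2f}"

order = OrderCore()
order.add("SIM card", 10.0)
order.add("Data plan", 25.0)
# Swapping the view touches zero lines of core logic.
print(SmallScreenView().render(order))
print(LargeScreenView().render(order))
```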
-Adam
Right now I believe it would make sense for me to target about two resolutions, and later learn my customers' needs through feedback?
It's a chicken and egg problem.
Ideally before you develop the product you already know what your customers use/need.
Often not even the customers know what they need until they use something (and more often than not you find out what they don't need rather than what they need).
So in this case, yes, spend a little bit of time developing a prototype app that you can send out there to a few people and get feedback. They will have better feedback because they can try it out, and you will have a springboard to start from. The ability to quickly release UI updates without changing core logic will allow you test several interfaces quickly without a huge time investment.
Further, to customers you will seem really responsive to their needs, which will be a big benefit to people whose jobs depend on reaction time.
-Adam
You mentioned Web based apps. Any particular framework you have in mind?
In many cases, WALL (the Wireless Abstraction Library) seems to help to a large extent.
Here's one article, Adapting to User Devices Using Mobile Web Technology, which exploits WALL.
