Is getting GPS data from a mobile browser known to be somewhat flaky?

I currently have WORKING code where, when the user presses a button to share their location, the browser pops up a "Would you like to share your location?" prompt and the front-end code handles the result (HTML5 Geolocation API).
I have noticed that the failure rate is about 10%, by which I mean the rate at which either POSITION_UNAVAILABLE triggers or the unknown error in the code below is hit:
$("#share-location-btn").click(shareLocation);
function shareLocation() {
$("#share-location-btn").prop("disabled", true);
$("#checking-location-msg").removeClass("hidden");
navigator.geolocation.getCurrentPosition(validateLocation, handleGeolocationError);
}
function handleGeolocationError(error) {
let errorText;
switch(error.code) {
case error.PERMISSION_DENIED:
errorText = "Location sharing was denied";
break;
case error.POSITION_UNAVAILABLE:
errorText = "Location can't be determined - contact us at XXX-XXX-XXXX";
break;
default:
errorText = "Unknown error - contact us at XXX-XXX-XXXX";
break;
}
errorUIHelper(errorText);
}
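For reference, the call above uses the default options; getCurrentPosition also accepts a third options argument (enableHighAccuracy, timeout, maximumAge), which is the only browser-side knob I'm aware of. A minimal retry sketch along those lines (the retry count and timeout values are just illustrative, and shareLocationWithRetry is not part of my current code):

function shareLocationWithRetry(retriesLeft) {
    var options = {
        enableHighAccuracy: true, // ask for a GPS fix rather than a coarse network location
        timeout: 10000,           // give up after 10 s instead of waiting indefinitely
        maximumAge: 0             // don't accept a cached position
    };
    navigator.geolocation.getCurrentPosition(
        validateLocation,
        function (error) {
            // POSITION_UNAVAILABLE and TIMEOUT are often transient, so retry once or twice
            if (retriesLeft > 0 && error.code !== error.PERMISSION_DENIED) {
                shareLocationWithRetry(retriesLeft - 1);
            } else {
                handleGeolocationError(error);
            }
        },
        options
    );
}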
I'm trying to reduce the failure rate further. I feel like this must be possible, because services like bike-sharing apps (e.g., Lime) literally require a GPS fix for every unlock action, and given how they do NOT have on-demand customer service for when this doesn't work, I'm assuming they have achieved a super low failure rate.
However, these services are of course apps. I understand that apps offer a lot more flexibility for geofencing services in general (e.g., location can be continually sent even when phone is locked, rather than requiring a user interaction).
That said, I kind of want to confirm, with people who are more experienced in doing geofencing on mobile browsers, NOT apps... is the mobile browser just known to be flaky? Because there are so many possible permutations of mobile device, operating system, browser, etc.?
And if this results in migrating to an app, are apps generally known to be less flaky in this regard?

Related

Are JWT-signed prices secure enough for PayPal transactions client-side?

I'm using NextJS with Firebase, and PayPal is 100x easier to implement client-side. The only worry I have is somebody potentially tampering with the amount before the order is sent to PayPal. If I JWT-sign the prices with a secret key, is that secure enough (within reason) to dissuade people from attempting to manipulate them?
I thought about writing a serverless function, but it would still have to pass the values to the client to finish the transaction (the prices are baked into a statically-generated site). I'm not sure if PayPal's IPN listener is still even a thing, or even the NVP (name-value-pairs). My options as I see them:
1. Verify the prices and do payment server-side (way more complex).
2. Sign the prices with a secret key at build time, and reference those prices when sending to PayPal (sketched below).
I should also mention that I'm completely open to ideas, and in no way think that these are the 'best' as it were.
Pseudo-code:
cart = {
    products: [ obj1, obj2, obj3 ],  // where each obj = { price, sale_price, etc. }
    total: cart.products.length
}
Create an order with PayPal, using the cart array and mapping over the values:
cart.products.map(prod => prod.sale_price || prod.price) // etc.
Someone could easily modify the object to make price '.01' instead of '99.99' (for example)
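For what it's worth, here is a rough sketch of option 2 as I currently picture it, assuming the jsonwebtoken package; PRICE_SECRET, resolvePrice and the product shape are my own placeholders, not anything PayPal-specific:

// Build time: sign each catalog price with a secret that never ships to the client.
const jwt = require("jsonwebtoken");

const products = [
    { id: "sku-1", price: 99.99, sale_price: null },
    { id: "sku-2", price: 49.99, sale_price: 39.99 }
];

const signedPrices = products.map(p =>
    jwt.sign({ id: p.id, price: p.sale_price || p.price }, process.env.PRICE_SECRET)
);

// Request time (serverless function): trust only the verified payload,
// never a price field posted by the browser.
function resolvePrice(token) {
    // jwt.verify throws if the token was tampered with or signed with a different key
    const { id, price } = jwt.verify(token, process.env.PRICE_SECRET);
    return { id, price };
}

Whatever actually creates the PayPal order would then be built from resolvePrice() results, so changing '99.99' to '.01' in the browser only produces a token that fails verification.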

Google Smart Home Toggles Trait mysterious utterances

I'm struggling to complete the development for a SmartHome action on our security panels, involving different trait implementations (including ArmDisarm, Power, Thermostats, etc.).
One specific problem is related to Toggles Trait.
I need to accept commands to enable or disable intrusion sensor bypass/exclusion.
I've added to the SYNC response the following block, for instance, for a window sensor in the kitchen:
{
  'id': '...some device id...',
  'name': {'name': 'Window Sensor'},
  'roomHint': 'Kitchen',
  'type': 'action.devices.types.SENSOR',
  'traits': ['action.devices.traits.Toggles'],
  'willReportState': true,
  'attributes': {
    'commandOnlyToggles': false,
    'queryOnlyToggles': false,
    'availableToggles': [
      {
        'name': 'bypass',
        'name_values': [
          { 'name_synonym': ['bypass', 'bypassed', 'exclusion'], 'lang': 'en' },
          { 'name_synonym': ['escluso', 'bypass', 'esclusa', 'esclusione'], 'lang': 'it' }
        ]
      }
    ]
  }
}
I was able to trigger the EXECUTE intent by saying
"Turn on bypass on Window Sensor" (although very unnatural).
I was able to trigger the QUERY intent by saying
"Is bypass on Window Sensor?" (even more unnatural).
These two utterances were found somewhere in a remote corner of a blog.
My problem is with Italian language (and also other western EU languages such as French/Spanish/German).
The EXECUTE intent seems to be triggered by this utterance (I bet no Italian speaker will ever say anything like that):
"Attiva escluso su Sensore Finestra"
(in this example the name provided in the SYNC request was translated from "Window Sensor" to "Sensore Finestra" when running in the context of an Italian linked account).
However, I was not able to find the utterance for the QUERY request. I've tried everything that could make some sense, but the QUERY intent never gets triggered, and the Assistant redirects me to a simple web search.
Why is there such a mystery around utterances? The sample English utterances in the Assistant docs are very limited, and most of the time it's difficult to guess their counterparts in a specific language; furthermore, no one from AOG has ever been able to give me any information on this topic.
It's been more than a year now that I've been trying to create a reference guide of utterances to include in our device user manual, still with no luck.
Can any one of you point me to some reference?
Or is there anything wrong with my SYNC data?
You can file a bug on the public tracker and include the QUERY utterances you have attempted. Since the EXECUTE intents seem to work, it may just be a bug in the backend grammar that isn't triggering.
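It is also worth double-checking that your QUERY handler returns the toggle state in the shape the trait expects. A rough sketch of the response, going by the Toggles trait documentation (the request and device ids are placeholders matching the SYNC example above):

{
  'requestId': '...request id from the QUERY intent...',
  'payload': {
    'devices': {
      '...some device id...': {
        'online': true,
        'currentToggleSettings': {
          'bypass': true
        }
      }
    }
  }
}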

Ldap query only returning 1000 users... yes I am using paging

I have a simple GetStaff function that should retrieve all users from Active Directory. We have over 1,000 users, so the directory searcher is using paging because the default AD MaxPageSize is 1000.
Currently the search works 'sometimes' when I build and sends back all 1054 users, and other times it only sends back 1000. If it works once, it works all the time. If it fails once, it fails all the time. I have set everything in using statements to make sure the objects are destroyed, but it still doesn't always seem to respect the PageSize attribute. By default, if the PageSize attribute is set, the searcher should use a SizeLimit of 0. I have tried leaving the size limit out, setting it to 0, and setting it to 100000, and the unstable result is the same. I have also tried lowering the PageSize to 250 and get the same unstable results. I am currently trying to change the LDAP policy on the server to have a MaxPageSize of 10000, and I am still receiving 1000 users with the searcher PageSize set to 10000 as well. Not sure what I am missing here, but any help or direction would be appreciated.
public IEnumerable<StaffInfo> GetStaff(string userId)
{
    try
    {
        var userList = new List<StaffInfo>();
        using (var directoryEntry = new DirectoryEntry("LDAP://" + _adPath + _adContainer, _quarcAdminUserName, _quarcAdminPassword))
        {
            using (var de = new DirectorySearcher(directoryEntry)
            {
                Filter = GetDirectorySearcherFilter(LdapFilterOptions.AllUsers),
                PageSize = 1000,
                SizeLimit = 0
            })
            {
                foreach (SearchResult sr in de.FindAll())
                {
                    try
                    {
                        var userObj = sr.GetDirectoryEntry();
                        var staffInfo = new StaffInfo(userObj);
                        userList.Add(staffInfo);
                    }
                    catch (Exception ex)
                    {
                        Log.Error("AD Search result loop Error", ex);
                    }
                }
            }
        }
        return userList;
    }
    catch (Exception ex)
    {
        Log.Error("AD get staff try Error", ex);
        return Enumerable.Empty<StaffInfo>();
    }
}
A friend got back to me with the below response that helped me out, so I thought I would share it and hope it helps anyone else with the same issue.
The first thing I think of is "Are you using the domain name, e.g. foo.com, as the _adPath?"
If so, then I have a pretty good idea. A DNS query for foo.com will return a random list of up to 25 DCs in the domain. If the first DC in that random list is not responsive or is firewalled off, and you got that DC from DNS, then you will experience the behavior you describe. Since the DNS result is cached on the local machine, you will see it happen consistently one day, then not the next. That's infuriating behavior. :/
You can verify this with a network trace to see if this is happening.
So how do you workaround it? A couple of options.
Query DNS -> create a list of the hosts returned -> try the first one. If it fails, try the next one. If you hit the bottom of the list, fail. If you do this, log each individual failure noisily so the admins don't blame you (a rough sketch follows below).
Even better would be to ask the AD administrators for a list of ldap servers and use that with the approach described above.
80% of administrators will tell you just to use the domain name. This is good because deploying a new domain controller will "just work" with no reconfiguration required.
15% of administrators will want to specify a couple of DCs that are network closest to the application. This is good for performance, but bad if they forget about this application when the time comes for them to upgrade their domain.
The other 5% doesn't really matter. :)
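A rough sketch of the first option, in the same style as the code above (ConnectToFirstReachableDc and its parameters are illustrative names; assumes using System.Net for Dns and the same Log helper as in the question):

private DirectoryEntry ConnectToFirstReachableDc(string domain, string container, string user, string password)
{
    // For an AD domain name, DNS typically returns one address per domain controller.
    var addresses = Dns.GetHostEntry(domain).AddressList;

    foreach (var address in addresses)
    {
        try
        {
            var entry = new DirectoryEntry("LDAP://" + address + "/" + container, user, password);
            var unused = entry.NativeGuid; // touching a property forces the bind, so dead DCs fail here
            return entry;
        }
        catch (Exception ex)
        {
            // Log each unreachable DC noisily so the admins don't blame the application.
            Log.Error("DC " + address + " unreachable, trying next", ex);
        }
    }

    throw new InvalidOperationException("No domain controller reachable for " + domain);
}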
The next point I see is that you are using LDAP, not LDAPS. That is fine, but there is a risk that you will use "Basic" binds. With "Basic" binds, Joe Hacker can steal your account credentials using a network sniffer. There are a couple of possible workarounds.
1. There is another DirectoryEntry constructor that will let you specify "Secure" as the auth method (a one-line example follows below).
2. Ask your admins if you can use LDAPS. (More portable, in case you need to talk to an LDAP server other than Active Directory.)
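For reference, that constructor overload takes the same arguments as the one in the question plus an AuthenticationTypes flag, roughly:

var directoryEntry = new DirectoryEntry("LDAP://" + _adPath + _adContainer, _quarcAdminUserName, _quarcAdminPassword, AuthenticationTypes.Secure);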
The last piece is regarding PageSize. 1,000 should be fine universally. Don't use any value > 5,000 or you can expect some fidgety behavior: that is higher than the default limit under Windows 2003, and in Windows 2008 the page size is hard-limited to 5,000 unless it has been overridden using a rather obscure bit in AD called dsHeuristics. http://support.microsoft.com/kb/2009267
LDAP is configured, by default, to return a maximum of 1000 results. You can change this setting on the domain you're requesting from.

PyroCMS / Codeigniter : too many session entries in db

I'm using the PyroCMS / CodeIgniter combo for a small website.
After adding some content, I checked the DB and saw a large number of session rows.
Is this normal behaviour? Multiple session_ids for one user with the same IP?
I can't imagine that this is correct.
My session config looks like this:
$config['sess_cookie_name'] = 'pyrocms' . (ENVIRONMENT !== 'production' ? '_' . ENVIRONMENT : '');
$config['sess_expiration'] = 14400;
$config['sess_expire_on_close'] = true;
$config['sess_encrypt_cookie'] = true;
$config['sess_use_database'] = true;
// don't change anything but the 'ci_sessions' part of this. The MSM depends on the 'default_' prefix
$config['sess_table_name'] = 'default_ci_sessions';
$config['sess_match_ip'] = true;
$config['sess_match_useragent'] = true;
$config['sess_time_to_update'] = 300;
I did not change a single line of code affecting the session class or anything like that.
The "red hat" rows belong to a 15-minute cron job; this is fine, I think.
Every time I refresh the page, two or three new session entries are added...
Yes, this is normal. The CI session class automatically generates a new ID periodically. (Every 5 minutes, by default.) This is part of the security inherent in using CI sessions instead of native PHP sessions. Garbage collection will take care of this, you do not need to do anything.
You can read more about the session id behavior in the CI manual. This is an excerpt copied from that page.
The user's unique Session ID (this is a statistically random string with very strong entropy, hashed with MD5 for portability, and regenerated (by default) every five minutes)
This behavior is by design. There is nothing to fix. The session class has built-in garbage collection that deletes old entries as needed. I have had many projects using CodeIgniter over several years; this is what it does.
If it really bothers you, you can alter the timeout in the main CI config file. Change the line
$config['sess_time_to_update'] = 300 (the 5-minute refresh period)
to a number greater than
$config['sess_expiration'] (default 7200)
This will cause the session to time out before it is regenerated. This is inherently less secure in theory, but unless you are transacting sensitive data, it is probably irrelevant in practice.
But again, this is by design as part of the many layers of CI sessions. These and other features are what make it better than PHP native sessions. You can turn on profiling and see that the overhead for these queries is negligible, especially in light of all the other optimizations the framework provides.

ACAccount Facebook: An active access token must be used to query information about the current user

I am using the iOS 6 Social framework to access the user's Facebook data. I am trying to get the likes of the current user within my app using ACAccount and SLRequest. I have a valid Facebook account reference of type ACAccount named facebook, and I'm trying to get the user's likes this way:
SLRequest *req = [SLRequest requestForServiceType:SLServiceTypeFacebook
                                    requestMethod:SLRequestMethodGET
                                              URL:url
                                       parameters:nil];
req.account = facebook;
[req performRequestWithHandler:^(NSData *responseData, NSHTTPURLResponse *urlResponse, NSError *error) {
    // my handler code.
}];
where url is @"https://graph.facebook.com/me/likes?fields=name". In my handler, I'm getting this response:
{
    error = {
        code = 2500;
        message = "An active access token must be used to query information about the current user.";
        type = OAuthException;
    };
}
Shouldn't access tokens be handled by the framework? I've found a similar post, Querying Facebook user data through new iOS6 social framework, but it doesn't make sense to hard-code an access token parameter into the URL, as logically the access token / login checking should be handled automatically by the framework. In all the examples I've seen, no one handles an access token manually:
http://damir.me/posts/facebook-authentication-in-ios-6
iOS 6 Facebook posting procedure ends up with "remote_app_id does not match stored id" error
etc.
I am using the iOS 6-only approach with the built-in Social framework, and I'm not using the Facebook SDK. Am I missing something?
Thanks,
Can.
You need to keep a strong reference to the ACAccountStore that the account comes from. If the store gets deallocated, it looks like it causes this problem.
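A minimal sketch of what I mean, assuming a view controller with a strong accountStore property (the property name and the requested permission are just illustrative):

// Keep the store in a strong property so it outlives the request.
@property (nonatomic, strong) ACAccountStore *accountStore;

// ...

self.accountStore = [[ACAccountStore alloc] init];
ACAccountType *fbType =
    [self.accountStore accountTypeWithAccountTypeIdentifier:ACAccountTypeIdentifierFacebook];

[self.accountStore requestAccessToAccountsWithType:fbType
                                           options:@{ACFacebookAppIdKey: @"<your Facebook app id>",
                                                     ACFacebookPermissionsKey: @[@"user_likes"]}
                                        completion:^(BOOL granted, NSError *error) {
    if (granted) {
        ACAccount *facebook = [[self.accountStore accountsWithAccountType:fbType] lastObject];
        // Build the SLRequest with this account exactly as in the question;
        // the access token stays managed by the framework.
    }
}];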
Try running on an actual device instead of a simulator. This worked for me.
Ensure that your bundle id is input into your Facebook app's configuration. You might have a different bundle id for your dev/debug build.
