PyroCMS / CodeIgniter: too many session entries in DB

I'm using the PyroCMS / CodeIgniter combo for a small website.
After adding some content, I checked the DB and saw a pile of session rows.
Is this normal behaviour, multiple session_ids for one user with the same IP? I can't imagine that this is correct.
My session config looks like:
$config['sess_cookie_name'] = 'pyrocms' . (ENVIRONMENT !== 'production' ? '_' . ENVIRONMENT : '');
$config['sess_expiration'] = 14400;
$config['sess_expire_on_close'] = true;
$config['sess_encrypt_cookie'] = true;
$config['sess_use_database'] = true;
// don't change anything but the 'ci_sessions' part of this. The MSM depends on the 'default_' prefix
$config['sess_table_name'] = 'default_ci_sessions';
$config['sess_match_ip'] = true;
$config['sess_match_useragent'] = true;
$config['sess_time_to_update'] = 300;
I did not change a single line of code affecting the session class or anything like that.
The red-marked rows belong to a 15-minute cron job; that part is fine, I think.
Every time I refresh the page, two or three new session entries are added...

Yes, this is normal. The CI session class automatically generates a new ID periodically (every five minutes, by default). This is part of the security inherent in using CI sessions instead of native PHP sessions. Garbage collection will take care of the old rows; you do not need to do anything.
You can read more about the session id behavior in the CI manual. This is an excerpt copied from that page.
The user's unique Session ID (this is a statistically random string with very strong entropy, hashed with MD5 for portability, and regenerated (by default) every five minutes)
This behavior is by design. There is nothing to fix. The session class has built-in garbage collection that deletes old entries as needed. I have had projects using CodeIgniter for several years; this is simply what it does.
If it really bothers you, you can alter the timeout in the main CI config file. Change the line
$config['sess_time_to_update'] = 300 (the five-minute refresh period)
to a number greater than
$config['sess_expiration'] (default 7200).
This will cause the session to time out before it is ever regenerated. This is inherently less secure in theory, but unless you are transacting sensitive data, it is probably irrelevant in practice.
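For example, a sketch of what that change could look like in CI's main config file (the values are illustrative, based on the config quoted above):

// Make the update interval exceed the session lifetime, so the session
// expires before its ID is ever regenerated.
$config['sess_expiration'] = 14400;
$config['sess_time_to_update'] = 14401; // anything > sess_expiration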
But again, this is by design, as part of the many layers of CI sessions. These and other features are what make it better than PHP native sessions. You can turn on profiling and see that the overhead of these queries is negligible, especially in light of all the other optimizations the framework provides.
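If you want to verify that overhead yourself, CI's profiler can be enabled from any controller with a single line:

// Appends a report of all queries and their execution times to the page output.
$this->output->enable_profiler(TRUE);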


CouchDB permanent authentication key

We're moving and updating our database because it's due, but we have an issue concerning authentication: we'd like to connect to the database with an authentication key only.
Our old CouchDB was not using any users, and all the databases were public (no user permissions or anything like that). It worked, but it is not what we want.
Now, with our 'new' CouchDB, we'd like all connections to be made with an authentication key only, but it looks like sessions expire, and we can't find a way to make a token permanent.
For context, I'm using couchdb-python for my tools, and I found ways to start a session and get the cookie, hence the authentication key. But whether via couchdb-python or the web UI (Fauxton, I think it's called), the expiration is still in effect, and after the timeout (as shown below) the session expires.
Below is our local.ini.
We tried adding both require_valid_user = false and allow_persistent_cookies = true, but to no avail.
[couchdb]
uuid = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
[couch_peruser]
[chttpd]
port = 5984
bind_address = 192.168.140.66
require_valid_user = false
[httpd]
[couch_httpd_auth]
require_valid_user = false
secret = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
allow_persistent_cookies = true
timeout = 600
[ssl]
[vhosts]
[admins]
admin = -xxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx,xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx,xx
I'm pretty sure there's something we're overlooking or that we did not understand correctly.
Is there a way to get an authentication key to be permanent?
The best option would be simply to use password authentication; the password never changes (unless you change it, of course).
But if you insist on using a token, you can increase the session timeout to some absurdly large value:
[couch_httpd_auth]
timeout = 99999999999999
This is untested, and I don't know what the maximum value is.
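If you do settle on password authentication with couchdb-python, a minimal sketch (host, credentials, and database name are placeholders):

import couchdb

# Credentials embedded in the URL are sent as HTTP basic auth on every
# request, so there is no session cookie that can expire.
couch = couchdb.Server('http://admin:secret@192.168.140.66:5984/')
db = couch['mydb']  # hypothetical database name
print(db.info())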

LDAP query only returning 1000 users... yes, I am using paging

I have a simple GetStaff function that should retrieve all users from Active Directory. We have over 1,000 users, so the DirectorySearcher uses paging, because the default AD MaxPageSize is 1000.
Currently the search works 'sometimes' when I build, sending back all 1054 users; other times it sends back only 1000. If it works once, it works every time; if it fails once, it fails every time. I have set everything up in using statements to make sure the objects are disposed, but it still doesn't always seem to respect the PageSize attribute.
By default, if the PageSize attribute is set, the searcher should use a SizeLimit of 0. I have tried leaving the size limit out, setting it to 0, and setting it to 100000; the unstable result is the same. I have also tried lowering the PageSize to 250, with the same unstable results. Currently I am trying a change to the LDAP policy on the server, with a MaxPageSize of 10000, and I am still receiving 1000 users even with the search PageSize set to 10000. Not sure what I am missing here; any help or direction would be appreciated.
public IEnumerable<StaffInfo> GetStaff(string userId)
{
    try
    {
        var userList = new List<StaffInfo>();
        using (var directoryEntry = new DirectoryEntry("LDAP://" + _adPath + _adContainer, _quarcAdminUserName, _quarcAdminPassword))
        {
            using (var de = new DirectorySearcher(directoryEntry)
            {
                Filter = GetDirectorySearcherFilter(LdapFilterOptions.AllUsers),
                PageSize = 1000,
                SizeLimit = 0
            })
            {
                foreach (SearchResult sr in de.FindAll())
                {
                    try
                    {
                        var userObj = sr.GetDirectoryEntry();
                        var staffInfo = new StaffInfo(userObj);
                        userList.Add(staffInfo);
                    }
                    catch (Exception ex)
                    {
                        Log.Error("AD Search result loop Error", ex);
                    }
                }
            }
        }
        return userList;
    }
    catch (Exception ex)
    {
        Log.Error("AD get staff try Error", ex);
        return Enumerable.Empty<StaffInfo>();
    }
}
A friend got back to me with the below response that helped me out, so I thought I would share it and hope it helps anyone else with the same issue.
The first thing I think of is: are you using the domain name, e.g. foo.com, as _adPath?
If so, then I have a pretty good idea. A DNS query for foo.com will return a random list of up to 25 DCs in the domain. If the first DC in that random list is unresponsive or firewalled off, and you get that DC from DNS, then you will experience the behavior you describe. Since DNS is cached on the local machine, you will see it happen consistently one day, then not the next. That's infuriating behavior. :/
You can verify this with a network trace to see if this is happening.
So how do you workaround it? A couple of options.
Query DNS -> build a list of the hosts returned -> try the first one. If it fails, try the next one. If you hit the bottom of the list, fail. If you do this, log each independent failure noisily so the admins don't blame you. (A rough sketch of this approach appears after the list below.)
Even better would be to ask the AD administrators for a list of ldap servers and use that with the approach described above.
80% of administrators will tell you to just use the domain name. This is good, because deploying a new domain will "just work" with no reconfiguration required.
15% of administrators will want to specify a couple of DCs that are network closest to the application. This is good for performance, but bad if they forget about this application when the time comes for them to upgrade their domain.
The other 5% doesn't really matter. :)
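A rough sketch of that failover approach (the names are hypothetical, and note that binding to a DC by IP address can force NTLM instead of Kerberos, so treat this as a starting point rather than a drop-in):

using System;
using System.DirectoryServices;
using System.Net;

public static class DcFailover
{
    // Resolve the domain to its DCs and try each in turn, instead of
    // trusting whichever host DNS happens to list first.
    public static DirectoryEntry Connect(string domain, string container,
                                         string user, string password)
    {
        foreach (IPAddress addr in Dns.GetHostAddresses(domain))
        {
            var entry = new DirectoryEntry("LDAP://" + addr + container, user, password);
            try
            {
                var unused = entry.NativeObject; // force the bind now, so a dead DC fails here
                return entry;
            }
            catch (Exception ex)
            {
                // Log each independent failure noisily so the admins don't blame you.
                Console.Error.WriteLine("DC {0} failed: {1}", addr, ex.Message);
                entry.Dispose();
            }
        }
        throw new InvalidOperationException("No domain controller responded for " + domain);
    }
}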
The next point I see is that you are using LDAP, not LDAPS. That is fine, but there is a risk that you will use "Basic" binds, and with "Basic" binds, Joe Hacker can steal your account credentials using a network sniffer. There are a couple of possible workarounds.
1. There is another DirectoryEntry constructor that will let you specify "Secure" as the auth method.
2. Ask your admins if you can use LdapS. (more portable, in case you need to talk to an LDAP server other than Active Directory)
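For option 1, a minimal sketch (server, container, and credentials are placeholders):

// AuthenticationTypes.Secure binds via Negotiate (Kerberos/NTLM), so the
// password is never sent in clear text; Sealing also encrypts the traffic.
var entry = new DirectoryEntry(
    "LDAP://dc01.example.com/ou=Staff,dc=example,dc=com",
    "svc-staff-reader", "password",
    AuthenticationTypes.Secure | AuthenticationTypes.Sealing);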
The last piece is regarding PageSize. 1,000 should be fine universally. Don't use any value > 5,000, or you can expect some fidgety behavior: that is higher than the default limit under Windows 2003, and in Windows 2008 the page size is hard-limited to 5,000 unless it has been overridden using a rather obscure bit in AD called dsHeuristics. http://support.microsoft.com/kb/2009267
LDAP is configured, by default, to return a maximum of 1000 results. You can change this setting on the domain you're requesting from.
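For reference, the LDAP query policy is typically viewed and changed with ntdsutil on a domain controller; a sketch of such a session, with a placeholder server name (see Microsoft's documentation for the exact procedure):

C:\> ntdsutil
ntdsutil: ldap policies
ldap policy: connections
server connections: connect to server dc01.example.com
server connections: q
ldap policy: show values
ldap policy: set maxpagesize to 5000
ldap policy: commit changes
ldap policy: q
ntdsutil: q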

'IntegrityError: column username is not unique' while using Django User model in test

While running some tests, I started to get an IntegrityError in my setUp function. Here is my code:
def setUp(self):
    self.client = Client()
    self.emplUser = User.objects.create_user('employee@email.com', 'employee@email.com', 'nothing')
    self.servUser1 = User.objects.create_user('thebestcompany@email.com', 'thebestcompany@email.com', 'nothing')
    self.servUser2 = User.objects.create_user('theothercompany@email.com', 'theothercompany@email.com', 'nothing')
    self.custUser1 = User.objects.create_user('john@email.com', 'john@email.com', 'nothing')
    self.custUser2 = User.objects.create_user('marcus@email.com', 'marcus@email.com', 'nothing')
    # ... save users here ...
I'm wondering how this IntegrityError keeps getting raised. I delete all the users in the tearDown function and am using sqlite3 as my DB backend. I see no conflicting usernames, and in production I have no issues with using emails as usernames.
This started happening only half an hour ago, out of the blue. Has anyone run into a solution to this problem?
I'm sure you're not suffering from this problem anymore, since you wrote 18 months ago, but I had this problem too and finally figured out what was happening: when using Postgres for test cases, DB changes are done in a transaction and simply rolled back, so it is not necessary to explicitly clear tables in tearDown(); in SQLite, however, it is.
A late but more appropriate answer, for the people who land here after a Google search:
When your tests interact with the database (typically, by creating model instances), you should subclass your test class from django.test.TestCase, which flushes the database after each test is run.
Then you don't need to write a tedious tearDown method in all your test classes.
See https://docs.djangoproject.com/en/dev/topics/testing/overview/#writing-tests
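A minimal sketch of that approach, adapted to the setUp from the question (the emails and passwords are the question's own placeholders):

from django.test import Client, TestCase
from django.contrib.auth.models import User

class StaffTests(TestCase):  # TestCase resets the DB between tests
    def setUp(self):
        self.client = Client()
        self.emplUser = User.objects.create_user(
            'employee@email.com', 'employee@email.com', 'nothing')

    def test_user_exists(self):
        # No tearDown needed: each test runs against a clean database.
        self.assertTrue(User.objects.filter(username='employee@email.com').exists())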

Manage Website Configuration via CMS

I've read questions on Stack Overflow very similar to this question, but not quite the same.
Let's say that I had the following config.inc.php file included on every page of my website:
<?php
$site_name = 'Acme Inc.';
$authenticate_with_ldap = true;
$ldap_host = 'ldap.example.com';
$ldap_port = 389;
$ldap_rdn = 'ldap-user';
$ldap_password = 'ldap-pass';
$ldap_dn = 'ou=example,dc=example,dc=com';
$smtp_username = 'smtp-user';
$smtp_password = 'smtp-pass';
$recaptcha_publickey = 'my-recaptcha-publickey';
$recaptcha_privatekey = 'my-recaptcha-privatekey';
?>
Note: I have chosen to keep the website configuration in a file instead of the database because the information is used all over the website and it would be a lot more code and, I'm guessing, a lot more overhead to have to query the database for the same information all the time.
Now let's say that the website administrator is the type of person who would prefer to edit the above information using a CMS as opposed to going in and editing the file manually. My fear is that when the website administrator clicks the "Update" button and the PHP script gets to the file_put_contents function that overwrites the config.inc.php file, something could go wrong and either corrupt the file or make it unusable due to a syntax error or something.
Is this a reasonable concern? Should I tell the website administrator that he should just tough it out and edit the file manually? Should I store the information in the database instead? Or should I store the information in both places so that if the file gets messed up, it can be regenerated using the information in the database?
If you store that info in the DB as a single row of data, wouldn't it be cached anyway?
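To illustrate the single-row idea (the table and column names are hypothetical, and a caching layer could sit on top):

<?php
// Hypothetical: all settings live in one row, fetched once per request.
$pdo = new PDO('mysql:host=localhost;dbname=site', 'db-user', 'db-pass');
$config = $pdo->query('SELECT * FROM site_config LIMIT 1')->fetch(PDO::FETCH_ASSOC);

$site_name = $config['site_name'];
$ldap_host = $config['ldap_host'];
// A repeated one-row read like this is typically served from the
// database's own cache, so the per-request overhead is small.
?>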

Why can't I update these custom fields in Salesforce?

Greetings,
Well, I am bewildered. I have been tasked with updating a PHP script that uses the Bulk API to upsert some data into the Opportunity entity.
This is all going well except that the Bulk API is returning this error for some clearly defined custom fields:
InvalidBatch : Field name not found : cv__Acknowledged__c
And similar.
I thought I had finally found the problem when I discovered that the WSDL I was using was quite old (Partner WSDL), so I promptly regenerated it. The only problem? Enterprise, Partner, etc., none of them include these fields. They're all coming from the Common Ground package and start with cv__.
I even tried to find them in the object explorer in Workbench as well as the schema explorer in Force.com IDE.
So, please...lend me your experience. How can I update these values?
Thanks in advance!
Clif
I have screenshots to prove I have the correct access.
EDIT -- Here is my code:
require_once 'soapclient/SforcePartnerClient.php';
require_once 'BulkApiClient.php';

$mySforceConnection = new SforcePartnerClient();
$mySoapClient = $mySforceConnection->createConnection(APP.'plugins'.DS.'salesforce_bulk_api_client'.DS.'vendors'.DS.'soapclient'.DS.'partner.wsdl.xml');
$mylogin = $mySforceConnection->login('redacted@redacted.com', 'redactedSessionredactedPassword');

$myBulkApiConnection = new BulkApiClient($mylogin->serverUrl, $mylogin->sessionId);

$job = new JobInfo();
$job->setObject('Opportunity');
$job->setOpertion('upsert'); // note: spelling matches the Bulk API client library's method name
$job->setContentType('CSV');
$job->setConcurrencyMode('Parallel');
$job->setExternalIdFieldName('Id');
$job = $myBulkApiConnection->createJob($job);

$batch = $myBulkApiConnection->createBatch($job, $insert); // $insert holds the CSV data shown below
$myBulkApiConnection->updateJobState($job->getId(), 'Closed');

$times = 1;
while ($batch->getState() == 'Queued' || $batch->getState() == 'InProgress')
{
    $batch = $myBulkApiConnection->getBatchInfo($job->getId(), $batch->getId());
    sleep(pow(1.5, $times++)); // exponential back-off between polls
}

$batchResults = $myBulkApiConnection->getBatchResults($job->getId(), $batch->getId());

echo "Number of records processed: " . $batch->getNumberRecordsProcessed() . "\n";
echo "Number of records failed: " . $batch->getNumberRecordsFailed() . "\n";
echo "stateMessage: " . $batch->getStateMessage() . "\n";

if ($batch->getNumberRecordsFailed() > 0 || $batch->getNumberRecordsFailed() == $batch->getNumberRecordsProcessed())
{
    echo "Failures detected. Batch results:\n" . $batchResults . "\nEnd batch.\n";
}
And lastly, an example of the CSV data being sent:
"Id","AccountId","Amount","CampaignId","CloseDate","Name","OwnerId","RecordTypeId","StageName","Type","cv__Acknowledged__c","cv__Payment_Type__c","ER_Acknowledgment_Type__c"
"#N/A","0018000000nH16fAAC","100.00","70180000000nktJ","2010-10-29","Gary Smith $100.00 Single Donation 10/29/2010","00580000001jWnq","01280000000F7c7AAC","Received","Individual Gift","Not Acknowledged","Credit Card","Email"
"#N/A","0018000000nH1JtAAK","30.00","70180000000nktJ","2010-12-20","Lisa Smith $30.00 Single Donation 12/20/2010","00580000001jWnq","01280000000F7c7AAC","Received","Individual Gift","Not Acknowledged","Credit Card","Email"
After 2 weeks, 4 cases, dozens of e-mails and phone calls, 3 bulletin board posts, and 1 Stackoverflow question, I finally got a solution.
The problem was quite simple in the end. (which makes all of that all the more frustrating)
As stated, the custom fields I was trying to update live in the Convio Common Ground package. Apparently our install has two licenses for this package, and neither was assigned to my user account.
It isn't clear what is really gained or lost by not having the license, other than API access; as the rest of this thread demonstrates, I was able to see and update the fields in every other way.
If you run into this, you can view the licenses on the Manage Packages page in Setup. Drill through to the package in question and it should list the users who are licensed to use it.
Thanks to SimonF's professional and timely assistance on the Developer Force bulletin boards:
http://boards.developerforce.com/t5/Perl-PHP-Python-Ruby-Development/Bulk-API-So-frustrated/m-p/232473/highlight/false#M4713
I really think this is a field-level security issue. Is the field included in the Opportunity layout for that user's profile? Field-level security picks the most restrictive option, so if you seem to have access from the setup screen but the field is not included in the layout, I don't think the system will give you access.
If you're certain that your user's profile has FLS access to the fields and the assigned layouts include them, then I'd suggest looking into the definition of the package in question. I know the Bulk API normally allows the use of fields in managed packages (I've done this).
My best guess at this point is that your org has installed multiple versions of this package over time. Through component deprecation, it's possible the package author deprecated these custom fields. Take a look at two places once you've logged into Salesforce:
1.) The package definition page. It should have details about what package version was used when the package was first installed and what package version you're at now.
2.) The page that has WSDL generation links. If you choose to generate the enterprise WSDL, you should be taken to a page that has dropdown elements that let you select which package version to use. Try fiddling with those to see if you can get the fields to show up.
These are just guesses. If you find more info, let me know, and I can try to provide additional guidance.
