I have two or more clients and one server, and I can keep all data synced between the clients and the server as long as they are all connected. The problem is when a client (a laptop) goes offline and comes back online after a while: in that situation I need to make sure only the latest data is synced across the databases. What happens now is that the last connected client's data gets synced to the other clients and the server even when it is not the latest change and newer changes exist on the server or other clients.
I would appreciate any help solving this.
I finally found the answer.
I added a load filter record and put the following BSH script in the filter_on_update column to reject changes whose modified-date value is older than the existing row's:
import java.text.SimpleDateFormat;
import java.util.Locale;

// MODIFIED and OLD_MODIFIED hold the incoming and existing values of the
// row's modified-date column, exposed to the BSH script by SymmetricDS.
SimpleDateFormat format = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.S", Locale.ENGLISH);

// Load the change only when the incoming row is newer than the existing one.
return format.parse(MODIFIED).after(format.parse(OLD_MODIFIED));
I have been asked to report an issue with connecting to Snowflake using the Node.js connector here.
Issue: https://github.com/snowflakedb/snowflake-connector-nodejs/issues/113
The issue is that I can't find any documentation on how to re-use an existing token to avoid the long delay when connecting to Snowflake.
Would appreciate any help.
EDIT
Here is the code I use:
// Tokens are retrieved from a DB
if (tokens) {
  connection.masterToken = tokens.masterToken;
  connection.masterTokenExpirationTime = tokens.masterTokenExpirationTime;
  connection.sessionToken = tokens.sessionToken;
  connection.sessionTokenExpirationTime = tokens.sessionTokenExpirationTime;
}

// resolve and reject come from the surrounding Promise executor
connection.connect(function (err, conn) {
  if (err) {
    reject(err);
  } else {
    resolve();
  }
});
This might not be a full answer, but hopefully it helps you or someone else. I've had similar issues. For us the process is to get a JWT token via a web service. I haven't tested this, but I suspect the token could be re-used. The JSON response includes a "lease_duration" property. I'm guessing this is in seconds, but I couldn't confirm it; to give you an idea, one value I got was 2764800. You could estimate the expiration with something like:
import java.util.Date;

// 'result' is the parsed JSON response (e.g. a Map<String, String>) and
// 'leaseStartTime' is the epoch-millisecond timestamp recorded when the
// token was fetched. Assuming lease_duration is in seconds, convert to ms:
long leaseDurationInMs = Long.parseLong(result.get("lease_duration")) * 1000L;
Date estimatedLeaseExpiration = new Date(leaseStartTime + leaseDurationInMs);
System.out.println("Estimated lease expiration timestamp (human readable): " + estimatedLeaseExpiration);
long estimatedLeaseExpirationInMs = estimatedLeaseExpiration.getTime();
Then check this value each time you would otherwise fetch the token, to see whether you need to get a new one.
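A minimal sketch of that check (fetchNewToken() is a hypothetical stand-in for whatever call retrieves a fresh token in your setup):

// Re-fetch only when the cached token has expired (or is about to).
if (System.currentTimeMillis() >= estimatedLeaseExpirationInMs) {
    token = fetchNewToken();                      // hypothetical helper wrapping the web-service call
    leaseStartTime = System.currentTimeMillis();  // restart the lease clock
}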
Sorry for answering my own question, but I ended up caching the data on my side to avoid connecting too often.
I have a simple GetStaff function that should retrieve all users from Active Directory. We have over 1,000 users, so the directory searcher uses paging because the default AD MaxPageSize is 1000.
Currently the search works "sometimes" when I build: it sends back all 1054 users part of the time, and only 1000 the rest of the time. If it works once, it works every time; if it fails once, it fails every time. I have set everything up in using statements to make sure the objects are destroyed, but it still doesn't always seem to respect the PageSize attribute. By default, if the PageSize attribute is set, the searcher should use a SizeLimit of 0. I have tried leaving the size limit out, setting it to 0, and setting it to 100000; the unstable result is the same. I have also tried lowering the PageSize to 250 and get the same unstable results. I have now tried changing the LDAP policy on the server to a MaxPageSize of 10000, and I am still receiving 1000 users with the search PageSize set to 10000 as well. Not sure what I am missing here, but any help or direction would be appreciated.
public IEnumerable<StaffInfo> GetStaff(string userId)
{
    try
    {
        var userList = new List<StaffInfo>();
        using (var directoryEntry = new DirectoryEntry("LDAP://" + _adPath + _adContainer, _quarcAdminUserName, _quarcAdminPassword))
        using (var searcher = new DirectorySearcher(directoryEntry)
        {
            Filter = GetDirectorySearcherFilter(LdapFilterOptions.AllUsers),
            PageSize = 1000,
            SizeLimit = 0 // 0 = no client-side limit; paging returns everything
        })
        // FindAll() holds unmanaged resources and must be disposed,
        // otherwise the handle leaks across searches.
        using (var results = searcher.FindAll())
        {
            foreach (SearchResult sr in results)
            {
                try
                {
                    var userObj = sr.GetDirectoryEntry();
                    var staffInfo = new StaffInfo(userObj);
                    userList.Add(staffInfo);
                }
                catch (Exception ex)
                {
                    Log.Error("AD Search result loop Error", ex);
                }
            }
        }
        return userList;
    }
    catch (Exception ex)
    {
        Log.Error("AD get staff try Error", ex);
        return Enumerable.Empty<StaffInfo>();
    }
}
A friend got back to me with the below response that helped me out, so I thought I would share it and hope it helps anyone else with the same issue.
The first thing I think of is: "Are you using the domain name, e.g. foo.com, as the _adPath?"
If so, then I have a pretty good idea. A DNS query for foo.com will return a randomized list of up to 25 DCs in the domain. If the first DC in that list is unresponsive or firewalled off and you get that DC from DNS, you will experience the behavior you describe. Since DNS is cached on the local machine, you will see it happen consistently one day and then not the next. That's infuriating behavior. :/
You can verify this with a network trace to see if this is happening.
So how do you work around it? A couple of options.
Query DNS -> create a list of the hosts returned -> try the first one. If it fails, try the next one. If you hit the bottom of the list, fail. If you do this, log each individual failure noisily so the admins don't blame you (see the sketch after these points).
Even better, ask the AD administrators for a list of LDAP servers and use that list with the approach described above.
80% of administrators will tell you to just use the domain name. This is good because a newly deployed domain controller will "just work" with no reconfiguration required.
15% of administrators will want to specify a couple of DCs that are network closest to the application. This is good for performance, but bad if they forget about this application when the time comes for them to upgrade their domain.
The other 5% doesn't really matter. :)
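A minimal sketch of that failover loop in C# (GetStaffFromServer is a hypothetical variant of the question's GetStaff that targets one specific server; adapt it to your own connection code):

private IEnumerable<StaffInfo> GetStaffWithFailover(string domain, string userId)
{
    // Resolve the domain name to the individual DC addresses behind it.
    var hosts = System.Net.Dns.GetHostAddresses(domain);
    foreach (var host in hosts)
    {
        try
        {
            return GetStaffFromServer(host.ToString(), userId); // hypothetical per-server search
        }
        catch (Exception ex)
        {
            // Log each individual failure noisily so the admins don't blame you.
            Log.Error("DC " + host + " failed, trying next", ex);
        }
    }
    throw new InvalidOperationException("No domain controller responded.");
}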
The next point that I see is that you are using LDAP, not LDAPs. That is fine, but there is a risk that you will use "Basic" binds. With "Basic" binds, joe hacker can steal your account credentials using a network sniffer. There are a couple of possible workarounds.
1. There is another DirectoryEntry constructor that lets you specify "Secure" as the auth method (see the one-liner after this list).
2. Ask your admins if you can use LDAPS (more portable, in case you need to talk to an LDAP server other than Active Directory).
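For the first option, a minimal sketch using the question's own variables and the AuthenticationTypes overload of the DirectoryEntry constructor:

var directoryEntry = new DirectoryEntry("LDAP://" + _adPath + _adContainer, _quarcAdminUserName, _quarcAdminPassword, AuthenticationTypes.Secure);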
The last piece is regarding PageSize. 1,000 should be fine universally. Don't use any value > 5,000 or you can expect some fidgety behaviors: that is higher than the default limit under Windows 2003, and in Windows 2008 the page size is hard-limited to 5,000 unless it has been overridden using a rather obscure bit in AD called dsHeuristics. http://support.microsoft.com/kb/2009267
LDAP is configured, by default, to return a maximum of 1000 results. You can change this setting on the domain you're requesting from.
I'm using the PyroCMS / CodeIgniter combo for a small website.
After adding some content, I checked the DB and saw this:
Is this normal behaviour? Multiple session_ids for one user with the same IP?
I can't imagine that this is correct.
My session config looks like this:
$config['sess_cookie_name'] = 'pyrocms' . (ENVIRONMENT !== 'production' ? '_' . ENVIRONMENT : '');
$config['sess_expiration'] = 14400;
$config['sess_expire_on_close'] = true;
$config['sess_encrypt_cookie'] = true;
$config['sess_use_database'] = true;
// don't change anything but the 'ci_sessions' part of this. The MSM depends on the 'default_' prefix
$config['sess_table_name'] = 'default_ci_sessions';
$config['sess_match_ip'] = true;
$config['sess_match_useragent'] = true;
$config['sess_time_to_update'] = 300;
I did not change one line of code affecting the session class or anything like that.
The red rows belong to a 15-minute cron job; this is fine, I think.
Every time I refresh the page, two or three new session entries are added...
Yes, this is normal. The CI session class automatically generates a new ID periodically. (Every 5 minutes, by default.) This is part of the security inherent in using CI sessions instead of native PHP sessions. Garbage collection will take care of this, you do not need to do anything.
You can read more about the session id behavior in the CI manual. This is an excerpt copied from that page.
The user's unique Session ID (this is a statistically random string
with very strong entropy, hashed with MD5 for portability, and
regenerated (by default) every five minutes)
This behavior is by design. There is nothing to fix. The session class has built-in garbage collection that deletes old entries as needed. I have had many projects using CodeIgniter for several years; this is what it does.
If it really bothers you, you can alter the timeout in the main CI config file. Change
$config['sess_time_to_update'] = 300 (the 5-minute refresh period)
to a number greater than
$config['sess_expiration'] (default 7200)
This will cause the session to time out before it is ever regenerated. This is inherently less secure in theory, but unless you are transacting sensitive data, it is probably irrelevant in practice.
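For instance, a minimal sketch of that change in the CI config (values are illustrative; the question's config uses sess_expiration = 14400):

$config['sess_expiration'] = 14400;
// greater than sess_expiration, so the ID is never regenerated mid-session
$config['sess_time_to_update'] = 14401;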
But again, this is by design as part of the many layers of CI sessions. These and other features are what make it better than PHP native sessions. You can turn on profiling and see that the overhead for these queries is negligible, especially in light of all the other optimizations the framework provides.
I need to update the info of an existing user in my database programmatically.
I need to update the user name and birth date values in the user_ table in the Liferay database;
basically I need to run an update query.
It is not recommended to update the Liferay database directly; you should use the Liferay API instead to do these things. As per this Liferay forum post:
The Liferay database is not published for a reason. The reason is the API does significantly more stuff than just simple SQL insert statements. There are internally managed foreign keys, there are things which are updated not just in the database but also in the indices, in jackrabbit, etc.
Since all of this is managed by the code and not by the database, any updates to the code will change how and when the database is updated. Even if it did work for you in a 6.1 GA1 version, GA2 is coming out in a couple of weeks and the database/code may change again.
Sticking with the API is the only way to insure the changes are done correctly.
OK, enough preaching; back to your problem. Here are some ways you can do this:
You can build a custom portlet that uses Liferay's services and update the user name, birth date, etc. using the UserLocalServiceUtil.updateUser() method.
Or you can build a web-service client based on SOAP or JSON to update the details, which would call the same method.
Or you can use Liferay's BeanShell tool to do this from the Control Panel; following is some code to update the user (created just for you ASAP):
import com.liferay.portal.kernel.util.StringPool;
import com.liferay.portal.model.Company;
import com.liferay.portal.model.Contact;
import com.liferay.portal.model.ContactConstants;
import com.liferay.portal.model.User;
import com.liferay.portal.service.CompanyLocalServiceUtil;
import com.liferay.portal.service.ContactLocalServiceUtil;
import com.liferay.portal.service.UserLocalServiceUtil;
import java.util.Calendar;
import java.util.Date;
import java.util.GregorianCalendar;
long companyId = 10135; // this would be different for you
User user = UserLocalServiceUtil.getUserByEmailAddress(companyId, "test@liferay.com");
// Updating User's details
user.setEmailAddress("myTest@liferay.com");
user.setFirstName("First Test");
user.setLastName("Last Test");
user.setScreenName("myTestScreenName");
UserLocalServiceUtil.updateUser(user, false);
// Updating User's Birthday
// December 12, 1912
int birthdayMonth = 11;
int birthdayDay = 12;
int birthdayYear = 1912;
Calendar cal = new GregorianCalendar();
cal.set(birthdayYear, birthdayMonth, birthdayDay, 0, 0, 0);
cal.set(Calendar.MILLISECOND, 0);
Date birthday = cal.getTime();
System.out.println("Updated User: " + user + "\nBirthdate to be updated: " + birthday);
long contactId = user.getContactId();
Contact contact = ContactLocalServiceUtil.getContact(contactId);
if(contact == null) {
contact = ContactLocalServiceUtil.createContact(contactId);
Company company = CompanyLocalServiceUtil.getCompany(user.getCompanyId());
contact.setCompanyId(user.getCompanyId());
contact.setUserName(StringPool.BLANK);
contact.setCreateDate(new Date());
contact.setAccountId(company.getAccountId());
contact.setParentContactId(ContactConstants.DEFAULT_PARENT_CONTACT_ID);
}
contact.setModifiedDate(new Date());
contact.setBirthday(birthday);
ContactLocalServiceUtil.updateContact(contact, false);
System.out.println("Users birthdate updated successfully");
The contact code was built with the help of Liferay's source code for the UserLocalServiceImpl#updateUser method.
In case you are wondering what BeanShell is and where to put this code, you can find it in the Liferay Control Panel: Control Panel --> Server --> Server Administration --> Script
It depends on whether you have to do this in portlet code or by sending a direct query to the DB.
Liferay basically caches everything, so if you update a record in the Liferay database while the portal is running, most likely that record is already in cache, and so the new column values won't be read at all. You will have to clear the database cache by going to Control Panel -> Server Administration.
On the contrary, if you have to do such a thing in a portlet code, you should call one of the methods of the Liferay services. You're trying to update a User, so you should call the method UserLocalServiceUtil.updateUser (or UserServiceUtil.updateUser if you also want to check permissions).
You can see there are several different updateUser methods: one of them has a lot of parameters and another has only the bean as a parameter. The first one contains all the business logic (validation, reindexing, update of related entities, etc.), while the second one was just autogenerated and should not be used (except when you absolutely know what you're doing). So, use the method with the many parameters, simply passing user.getCOLUMN() (e.g. user.getFacebookId()) for any column whose value you don't want to change.
Hope it helps, and sorry for my bad English...
update user_ set firstName="New First Name", lastName="New Last Name" where emailAddress="test@test.com";
update contact_ set birthday="date string" where contactId in (select contactId from user_ where emailAddress="test@test.com");
With the first update query you can change the user's firstName and lastName, and with the second query you can change the user's birth date.
Hope it's clear!
Try this code.
Here I am updating only the user's first name (the rest you can do the same way).
userId: you can get this using the ThemeDisplay.
User user = UserLocalServiceUtil.getUser(userId);
user.setFirstName("new name");
UserLocalServiceUtil.updateUser(user);
Hope this will help you !!!
While running some tests, I started to get an IntegrityError in my setUp function. Here is my code:
def setUp(self):
    self.client = Client()
    self.emplUser = User.objects.create_user('employee@email.com', 'employee@email.com', 'nothing')
    self.servUser1 = User.objects.create_user('thebestcompany@email.com', 'thebestcompany@email.com', 'nothing')
    self.servUser2 = User.objects.create_user('theothercompany@email.com', 'theothercompany@email.com', 'nothing')
    self.custUser1 = User.objects.create_user('john@email.com', 'john@email.com', 'nothing')
    self.custUser2 = User.objects.create_user('marcus@email.com', 'marcus@email.com', 'nothing')
    # ... save users here ...
I'm wondering how this IntegrityError keeps getting raised. I delete all the users in the tearDown function and am using sqlite3 as my DB backend. I see no conflicting usernames, and in production I have no issues with using emails as usernames.
This started happening only half an hour ago, out of the blue. Has anyone run into a solution to this problem?
I'm sure you're not suffering from this problem anymore, since you wrote 18 months ago, but I had this problem too and finally figured out what was happening. When using Postgres for test cases, DB changes are done in a transaction and simply rolled back, so it is not necessary to explicitly clear tables in tearDown(); in SQLite, however, it is necessary.
A late but more appropriate answer, for the people who land here after a Google search:
When there is interaction with the database in your tests (typically, creating model instances), you should subclass your test class from django.test.TestCase, which flushes the database after each test is run.
Then you don't need to write a tedious tearDown method in all your test classes.
See https://docs.djangoproject.com/en/dev/topics/testing/overview/#writing-tests
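For example, a minimal sketch using one of the users from the question (StaffTests is a hypothetical name; subclassing django.test.TestCase is the point):

from django.contrib.auth.models import User
from django.test import TestCase


class StaffTests(TestCase):
    def setUp(self):
        self.emplUser = User.objects.create_user(
            'employee@email.com', 'employee@email.com', 'nothing')

    def test_employee_exists(self):
        self.assertTrue(
            User.objects.filter(username='employee@email.com').exists())

    # No tearDown needed: TestCase wraps each test in a transaction and
    # rolls it back, so the created users never leak between tests.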