I'm having almost the identical problem as described here, but unfortunately that question provides no solutions. I'm using strictly HTTPS and still have the problem. I've tried everything I can think of: Session.checkAgent = false, Session.cookieTimeout = 0, Security.level = low, etc. I cannot reproduce the problem in any way, yet a small portion of our customers are complaining that their sessions are being lost. I don't know how to debug this or determine how and where the session is being destroyed. I don't know what else to do; can anyone help? I'm using CakePHP 2.4.5 and can upgrade to 2.5.5, but I'd like to identify the actual issue so I can have peace of mind that it has been fixed.
This may help
Configure::write('Session.referer_check', false);
But before putting that into production, you should know how it may affect your security.
The only real way to fix this is if you can reproduce it.
I have a program that can find exploitable sites and exploit them. I posted it on my GitHub and people started using it. I was recently contacted by the owner of one of those websites (my email address is listed there for troubleshooting issues), who was extremely upset that their website had been exploited, and that it had been found using my tool.
Is there a license, disclaimer, or anything else I can use so that when someone exploits something with my tool, the responsibility doesn't fall back on me?
I'm thinking that since I created the tool, it could ultimately come back to haunt me, even though I cannot control other people's actions. Any help with this would be appreciated. Thank you.
I think a standard MIT license covers that:
link
Read the last part.
I've been trying to work with Codename One for years, but I still find errors that prevent me from releasing my apps.
Locally I can fix errors by overshadowing the erroneous classes. This works, but for some reason it doesn't work when I send my apps to the build server.
Being able to overshadow faulty classes would be good in many ways:
I could get on with my work faster
I could check how my corrections work on the different platforms
I could contribute to the further development of Codename One
I suffer a lot from not being able to publish my apps, because I see no way to fix basic problems myself.
I love iPhones and do not like the Mac. Therefore I do not own a Mac and prefer to work with Linux and use the Codename One build server.
What are the reasons for not supporting the overshadowing of classes like com.codename1.ui.Component? Can you see that it would be beneficial?
This isn't the first time people have asked for this, but we won't deliver it. Doing so creates huge problems:
Developers don't file issues or submit fixes; instead they make local fixes
Developers break things due to complex behaviors, then try to get support and blame us for the issues
We have a process of submitting patches to Codename One, patches are always accepted quickly when they are valid. If something needs fixing that's what you need to do. If you need a hack then submit a patch that defines the extension point that you need. That's why we are open source...
In the past this might have been painful, as you would have needed to wait until we updated the servers, but since changes now go in every week, this is no longer an issue. Don't think of it as "contributing"; think of it as free code review, where the entire community pulls together to improve your work...
I wrote a PAM module which does a couple of things and became too large to post any code here. It basically works similarly to pam_abl, but with a couple of additional features such as city/country-based blocking, as well as checking against a DNS blacklist.
Now I want to give the user a reason why their login was not successful, something like: "login failed because your country is blocked".
I hope you get the idea. Although I did some research, I have not yet found a way to do this in pam_auth. I hope someone can give me a hint and/or point me in the right direction. Thanks in advance.
Edit: For anyone else with a similar problem: pam_info is what you are looking for.
The source code of pam_motd(8) should give you some idea of how to write back to the user.
Actually, there is the function pam_info(3), which does exactly what you want.
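As a minimal sketch (assuming a Linux-PAM module, with is_country_blocked() as a hypothetical stand-in for your own GeoIP/DNSBL check), the module can report the reason through pam_info(3) just before rejecting the login:

#include <security/pam_modules.h>
#include <security/pam_ext.h>

/* Hypothetical placeholder for the module's real country/DNSBL check. */
static int is_country_blocked(pam_handle_t *pamh) {
    (void)pamh;
    return 1;
}

int pam_sm_authenticate(pam_handle_t *pamh, int flags,
                        int argc, const char **argv) {
    (void)flags; (void)argc; (void)argv;
    if (is_country_blocked(pamh)) {
        /* Queues a PAM_TEXT_INFO message through the application's
         * conversation function, so the user can see the reason. */
        pam_info(pamh, "Login failed because your country is blocked.");
        return PAM_AUTH_ERR;
    }
    return PAM_SUCCESS;
}

int pam_sm_setcred(pam_handle_t *pamh, int flags,
                   int argc, const char **argv) {
    (void)pamh; (void)flags; (void)argc; (void)argv;
    return PAM_SUCCESS;
}

Whether the text actually reaches the user depends on the calling application's conversation function; some clients only display PAM_TEXT_INFO messages in certain authentication modes.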
I have a weird issue with the BigQuery UI (at https://bigquery.cloud.google.com/queries/my-project-name). I don't know why, but I see no datasets for my projects, even though I'm fully aware they exist. My code can still hit these datasets and their tables; there is just no way for me to see them.
In the UI itself, I can still query them if I type the whole query by hand, but being able to see the structure of my schema would be helpful.
When I check the network tab in Chrome's developer tools, I notice that I receive "Failed to load resource: net::ERR_CACHE_MISS". I then decided to do everything I could to reset my own cache: I cleared my cookies, went incognito, and tried other browsers and even other computers. NOTHING brings back my datasets.
Anyone encountered this and has any ideas how to force my cache to hit?
I had the same problem a while back. When I got the error, I struggled with it and eventually found a way to reset it. It seems something cached server-side causes this incorrect cache hit. The way to reset the server-side cache is to hit a URL with a project that doesn't exist, so something like https://bigquery.cloud.google.com/queries/bogus-nonexistant-project should reset it all.
Did you recently assign a new string ID to your project that previously only had a numeric ID? If so, this is a known issue that has been reported recently, and one I'm still working to resolve.
The issue is that the frontend cache gets stuck with the old numeric ID for the project and our frontend JS has a bug where it errors out instead of updating the cache to contain the new string ID. LiY's workaround of going to a bogus, uncacheable URL is the suggested workaround to unstick the cache until this bug is resolved.
(And if you didn't recently assign a new string ID to your project, then I'd love to hear more details about what might have caused this issue so it won't happen to anyone else!)
I have a large database of links that I want to protect against others who would copy them. Is there anything I can do other than forcing people to enter a CAPTCHA before each link?
You can output the links using ROT13 and then use JavaScript to put them back to normal.
That way, scrapers must support JavaScript in order to steal your links, which should cut down on the number of capable scrapers.
Bonus points: replace ROT13 with something harder, and obfuscate your 'decode' JavaScript.
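To illustrate the transform, here is a minimal ROT13 helper, sketched in C for the server side (the sample link is made up); since ROT13 is its own inverse, the client-side JavaScript 'decode' step is the same character shift applied again:

#include <stdio.h>

/* Rotate each ASCII letter 13 places; non-letters pass through unchanged. */
static void rot13(char *s) {
    for (; *s; s++) {
        if (*s >= 'a' && *s <= 'z')
            *s = 'a' + (*s - 'a' + 13) % 26;
        else if (*s >= 'A' && *s <= 'Z')
            *s = 'A' + (*s - 'A' + 13) % 26;
    }
}

int main(void) {
    char link[] = "http://example.com/some/link";
    rot13(link);              /* emit this obfuscated form into the HTML */
    printf("%s\n", link);     /* prints "uggc://rknzcyr.pbz/fbzr/yvax" */
    rot13(link);              /* applying it a second time restores the original */
    printf("%s\n", link);
    return 0;
}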
The JavaScript suggestion could work, but you would render your page inaccessible to anyone using assistive technologies such as screen readers, as well as anyone without JavaScript.
Another possible option would be to generate a cryptographic nonce. This technique is currently used to protect against CSRF attacks, but could also be used to ensure that the scraper would have to request a page from your site before accessing a link. This approach may not be appropriate if you support hotlinking, but if you just want to make sure that someone went to your site first, it could work.
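As a rough sketch of that idea (shown in C and Linux-specific in that it reads /dev/urandom; the link format is illustrative, and the server-side bookkeeping of issued nonces is omitted), the server would mint a random token per page view and refuse link requests that don't carry a valid one:

#include <stdio.h>
#include <stddef.h>

/* Fill `out` (at least 2*n + 1 bytes) with a hex-encoded nonce built
 * from n random bytes; returns 0 on success, -1 on failure. */
static int make_nonce(char *out, size_t n) {
    unsigned char buf[32];
    FILE *rnd;
    size_t i;
    if (n > sizeof buf)
        return -1;
    rnd = fopen("/dev/urandom", "rb");
    if (rnd == NULL)
        return -1;
    if (fread(buf, 1, n, rnd) != n) {
        fclose(rnd);
        return -1;
    }
    fclose(rnd);
    for (i = 0; i < n; i++)
        sprintf(out + 2 * i, "%02x", buf[i]);
    return 0;
}

int main(void) {
    char nonce[33];
    if (make_nonce(nonce, 16) == 0) {
        /* Embed the nonce in every link on the page, remember it
         * server-side with a short expiry, and serve the target only
         * when the presented nonce matches an unexpired entry. */
        printf("<a href=\"/go?id=42&nonce=%s\">link</a>\n", nonce);
    }
    return 0;
}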
Another, somewhat ghetto option would be to use referrers. These can easily be faked, but it might stop some of the dumber scrapers. This also requires that you know where your users came from before they hit your site.
Can you let us know if you are hotlinking or if the user comes to your site before going to the protected link? We might be able to provide better advice that way.