File is not being created on Heroku using CakePHP

I tried to create a file on Heroku using this PHP code:
$fh = fopen("../DownloadFiles/".$filename,'a');
fwrite($fh,$s);
but the file is not created and no error is shown. Please help.

This should work just fine, but are you aware that if you're running multiple dynos, that file will exist only on the dyno that served that one request, and not on all the others?
Also, dynos restart at least every 24 hours, and their state is reset every time you push a change to Heroku, so you cannot store persistent information on them; this is called an ephemeral filesystem.
You could, for instance, store uploaded files on Amazon S3, like described in the docs: https://devcenter.heroku.com/articles/s3-upload-php
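A minimal sketch of that approach, assuming the AWS SDK for PHP (v3) is installed via Composer and your AWS credentials are exposed to the dyno as config vars; the bucket name and region below are made up:

<?php
require 'vendor/autoload.php';

use Aws\S3\S3Client;

// The client picks up AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY
// from the environment (set them as Heroku config vars).
$s3 = new S3Client([
    'version' => 'latest',
    'region'  => 'us-east-1', // assumption: use your bucket's region
]);

// Write the content to S3 instead of the dyno's ephemeral filesystem.
$s3->putObject([
    'Bucket' => 'my-app-downloadfiles', // hypothetical bucket name
    'Key'    => 'DownloadFiles/' . $filename,
    'Body'   => $s,
]);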
Two remarks about your original issue:
you're probably running an old version of CakePHP which hijacks all internal PHP error handling and writes it out to a log file (if you're lucky), so you can't see anything in heroku logs, and it's not possible to configure it otherwise; upgrade to a more recent version that lets you log to streams, and then use php://stderr as the destination (see the first sketch below)
in general, if you want to write to a file in PHP, you can just do file_put_contents($filename, $contents)...
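Two hedged sketches of those remarks. For the logging side, on CakePHP 3 you can point the log engines at php://stderr in config/app.php so that everything shows up in heroku logs; the 'Console' engine and its 'stream' option are taken from the CakePHP 3 docs, so double-check them against your exact version:

// config/app.php (fragment)
'Log' => [
    'debug' => [
        'className' => 'Console',
        'stream' => 'php://stderr',
        'levels' => ['notice', 'info', 'debug'],
    ],
    'error' => [
        'className' => 'Console',
        'stream' => 'php://stderr',
        'levels' => ['warning', 'error', 'critical', 'alert', 'emergency'],
    ],
],

For the file-writing side, using the names from the question (FILE_APPEND reproduces the 'a' mode of the original fopen call):

<?php
$path = '../DownloadFiles/' . $filename;

// One call replaces the fopen()/fwrite() pair, and the return value
// tells you whether the write actually happened.
if (file_put_contents($path, $s, FILE_APPEND) === false) {
    // Log to stderr so the failure shows up in `heroku logs`.
    file_put_contents('php://stderr', "Could not write $path\n");
}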

Does the DownloadFiles folder exist in the deployment? fopen will fail if the directory is not found. You can add a snippet to check whether the directory exists and create it if it doesn't; see the sketch below. (In Node.js the equivalents are fs.exists and fs.mkdir.)
For more info: http://nodejs.org/api/fs.html
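A minimal PHP sketch of that check, reusing the relative path from the question:

<?php
$dir = '../DownloadFiles';

// fopen() does not create missing parent directories,
// so create the folder first if it is absent.
if (!is_dir($dir) && !mkdir($dir, 0775, true)) {
    file_put_contents('php://stderr', "Could not create $dir\n");
}

$fh = fopen($dir . '/' . $filename, 'a');
fwrite($fh, $s);
fclose($fh);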

Related

Restrict a file from being edited in GitLab (.gitlab-ci.yml)

As you know, we have a file for GitLab CI configuration named '.gitlab-ci.yml',
and this file shouldn't be edited by any developer, so I decided to prevent developers from editing it.
The thing is, GitLab says you can lock a file against editing, but the prerequisite for this feature is a Premium account.
What can I do when I don't have a Premium account?
Do you have any idea how to lock a file against editing?
Check if you have access to the Push Rules feature, which is a kind of pre-receive hook.
Or you can set up a pre-receive hook yourself if your GitLab server is on-premise.
In both cases, you can list the files being pushed in that hook and fail if one of them is .gitlab-ci.yml, as in the sketch below.
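A minimal sketch of such a pre-receive hook. Server hooks can be any executable; a shell script is more common, but here is a PHP version to match the rest of this page (the error message is illustrative):

#!/usr/bin/env php
<?php
// Each line on stdin is "<old-sha> <new-sha> <ref-name>".
$lines = file('php://stdin', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
foreach ($lines as $line) {
    list($old, $new, $ref) = explode(' ', $line);
    // A newly created ref has an all-zero old SHA; diff against the
    // well-known empty-tree object in that case.
    $base = preg_match('/^0+$/', $old)
        ? '4b825dc642cb6eb9a060e54bf8d69288fbee4904'
        : $old;
    $changed = shell_exec(sprintf(
        'git diff --name-only %s %s',
        escapeshellarg($base),
        escapeshellarg($new)
    ));
    if ($changed !== null
        && in_array('.gitlab-ci.yml', explode("\n", trim($changed)), true)) {
        fwrite(STDERR, ".gitlab-ci.yml must not be modified; push rejected.\n");
        exit(1); // any non-zero exit status rejects the push
    }
}
exit(0);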
As of today, the official way (read: workaround) for this seems to be creating a different repository for the .yml file with more restrictive permissions and then referencing that .yml file from your project:
A .gitlab-ci.yml may contain rules to deploy an application to the production server. This deployment usually runs automatically after pushing a merge request. To prevent developers from changing the .gitlab-ci.yml, you can define it in a different repository. The configuration can reference a file in another project with a completely different set of permissions (similar to separating a project for deployments). In this scenario, the .gitlab-ci.yml is publicly accessible, but can only be edited by users with appropriate permissions in the other project.
https://docs.gitlab.com/ee/ci/environments/deployment_safety.html#protect-gitlab-ciyml-from-change
Also, there is a discussion on this matter here:
https://gitlab.com/gitlab-org/gitlab/-/issues/15632
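For completeness, the setting that wires this up is the project's custom CI/CD configuration path (Settings > CI/CD > General pipelines in the project that should consume the shared configuration), which accepts a file@project reference; the group and project names here are made up:

.gitlab-ci.yml@mygroup/ci-config-project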

WebLogic domain server .out log file locked after being manually modified

I have 2 questions regarding WebLogic.
I am using Weblogic 10.3.6.
Yesterday I deployed the war file.
Following are my 2 questions:
1) When I restart the server, the sysout logs at location
domains//servers//logs/Server-name.out
domains//servers//logs/Server-name.log
are not getting updated
Actually, the logs were getting updated initially, but I cleared the log file by manually opening it and deleting its content.
Later I found the following on the official Oracle website:
"Oracle recommends that you do not modify log files by editing them manually. Modifying a file changes the timestamp and can confuse log file rotation. In addition, editing a file might lock it and prevent updates from WebLogic Server, as well as interfere with the Accessor"
I think my log files got locked for the above reason.
Is there anything I can do to get the log files updating again?
I have restarted the server as well, but the logs are still not getting updated.
2) I have deployed my web application as a packed war file. When deploying a war file, it is expected that it gets exploded at some temporary location on the WebLogic server. The war gets deployed successfully, but when I checked the contents of
WEBLOGIC/bea10.3.6.0_BI/user_projects/domains/Managedserver_7011_7012/servers/Server-chanakya/tmp/_WL_user
it is blank.
I was expecting the war to be exploded inside the _WL_user folder, but that is not happening right now.
Please let me know what I can do about the above problems.
Thanks in advance.
First question:
Generally speaking the .out file is created during server start and not updated once the server reaches the RUNNING state. The .log file should be updated continually however. It is safe to delete both of these files and once the server is restarted they should be regenerated. If for some reason they are not, go to server name -> Logging tab -> Log file name and specify the full path and name for a new log file.
Second question:
If you chose nostage for your deployment, it will not be copied to your server and will live wherever the file originally was. stage mode should copy the file to tmp/_WL_user after starting out under a stage directory. You can remove your deployment from the weblogic admin console and also delete the tmp and cache folders and try the deployment again if you need to. It's also possible the deployment failed... check the Deployments link in the admin console to make sure it reached the Active state.
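As an aside, staging mode can also be forced from the command line with the weblogic.Deployer tool's -stage flag; this invocation is only illustrative (admin URL, credentials, target, and path are placeholders):

java weblogic.Deployer -adminurl t3://adminhost:7001 -username weblogic -password welcome1 -deploy -stage -targets Server-chanakya /path/to/myapp.war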
Last - welcome to Stack Overflow. In general you should ask one question at a time.

Magento upgrade from 1.6.2 to 1.7.0.2 - will the database be changed?

I am looking into upgrading my Magento Community installation from 1.6.2 to 1.7.0.2.
First I will do this on my test server, but there are some errors during updating in Magento Connect, so I have to upload some files myself...
But when I am going to put these data into the live environment, can I just simply copy my data from the FTP to the live website?
Or are there also some new/changed settings in the database?
And if yes to the last question, which lines are changed?
I was able to successfully upgrade Magento from 1.6.1 to 1.7 using Connect Manager.
Here are the steps I had found and followed:
1. Go to yourdomain.com/magento/downloader/ (of course, make this match your installation's path).
2. Because I had installed Magento using a tar.gz package provided with a skin I wanted to use, Magento Connect didn't have all the extensions listed for upgrading. I had to type "connect20.magentocommerce.com/community/Mage_All_Latest" in the "Install New Extensions / Paste extension key to install" field.
3. If you run into an error along the lines of "CONNECT ERROR: Package 'Mage_All_Latest' is invalid", repeated several times, once for each package, it is because the files already exist, and you have to remove a line of code in order for it to overwrite the data.
4. After everything has updated, you will probably have some errors. Make sure you clean the cache and session directories (delete everything in /var/cache and /var/session).
5. If you receive a "500 Internal Server Error", it is more than likely because of file and folder permissions. It took half a second to reset all of the permissions to what they needed to be.
6. If you receive a "Service Temporarily Unavailable - the server is temporarily unable to service your request" error on a Magento-formatted page, it is probably because the store is set to offline mode to prevent visitors from interfering with the installation process. To fix this, delete the "maintenance.flag" file found in the root of your Magento installation directory.
Everything should be ready!
Avoid uploading core library changes via FTP.
The fastest and most secure way is to patch your application using the diff files:
patch -p0 -f < 1.6.2.0-1.7.0.0.diff
Then, when you first visit your site, Magento will automatically upgrade your DB.
The best way to update is to get a fresh Magento zip (1.7.1 or whatever) and connect it to your current DB. When you reindex, the new install will update your DB to the latest Magento schema. This way you don't have to use Connect, etc.; the Magento zip has its own SQL updates.
Make sure you put your current theme into the new install, and test it first on localhost.

App Engine upload mechanism for changed data: does it upload the whole app or a delta?

I upload my_app to App Engine by:
appcfg.py update c:\my_app ...
If I have already uploaded my_app and then made a minor change to a file,
does it upload the whole project to App Engine and overwrite the whole previous project?
Or does it upload only the relevant change and overwrite the relevant part?
And what is the case for this command:
bulkloader.py --restore --filename=my_kind.dump ...
Did you try it?
update uploads the whole application each time. There's no concept of a delta. Normally, when you upload a new version I would suggest changing the version setting - that way you can keep up to 10 previous versions of your app on the site, and only set the new one to be the default once you are sure it is working.
If you upload without changing the version, AppEngine actually creates a new version before deleting the old one, so you need a spare slot in your versions list.
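For example, with the appcfg workflow the deployed version comes from the version field in app.yaml (appcfg.py also accepts a -V/--version override on the command line, if I remember correctly); the values below are illustrative:

# app.yaml (fragment)
application: my_app
version: 2        # bump this before uploading; switch the default once it works
runtime: python27
api_version: 1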
I don't understand your question about the bulkloader. Are you asking if that does a delta? No, it can't, because it sends the data serially via the remote API - there's no way for it to know in advance which rows in your data file already exist in the datastore.

App Engine: Development datastore cleared each time I turn off my computer. How to avoid this?

I've been using App Engine with Python for a few months. Now that my application has a fair amount of code, I'm trying to solve a problem I've ignored so far:
Each time I turn off my computer, all my development datastore entities are removed.
I would like to keep this data until the next time I launch my development server. But I would also like to be able to turn off my computer without losing all of this data.
How should I proceed?
Thanks a lot
======== UPDATE ==========
When I set the datastore_path flag as explained by @moishe, my development server crashes as soon as it must write into the datastore:
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/api/datastore_file_stub.py", line 557, in __WritePickled
os.rename(tmp_filename, filename)
OSError: [Errno 13] Permission denied
Therefore, I gave this folder write permission for everyone:
chmod a+w /my_app_folder
But now I get another error, which is:
OSError: [Errno 21] Is a directory
Obviously the path should not be a directory. So I changed the path to:
/my_app_folder/data.datastore
And now it works! PFF...
Maybe the default datastore path is in a /tmp directory that's being deleted on shutdown? You can manually set the path with the --datastore_path flag of dev_appserver.py. See the docs for details.
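For example (note that the flag must point at a file, not a directory, as the update above discovered; the path is illustrative):

dev_appserver.py --datastore_path=/my_app_folder/data.datastore /path/to/my_app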
This clearing should not be the default behavior.
Check that this application in the Google AppEngine launcher doesn't have the --clear_datastore flag.
Select the app in the list and select Edit->Application Settings...
Extra Command Line Flags should be empty.
I once set this to restart some tests and forgot to remove it.
Remove the existing application in the launcher and Create New Application. See if that helps.
Verify the OS isn't deleting the file. If you open the log for the app and then launch it, the output says where the sqlite file is located (e.g. T:\temp\dev_appserver.rdbms).
You can also use this flag when starting the dev server:
--storage_path=...
Path at which all local files (such as the Datastore, Blobstore files,
Google Cloud Storage Files, logs, etc) will be stored, unless
overridden by --datastore_path, --blobstore_path, --logs_path, etc.
found at https://developers.google.com/appengine/docs/python/tools/devserver?csw=1
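For example (the storage path is illustrative):

dev_appserver.py --storage_path=/my_app_folder/storage /path/to/my_app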
I had the same problem, and installing the latest GAE SDK solved it.
As in the case here: app engine datastore auto-clears every time project runs
