How does logging in managed VMs work? - google-app-engine

I'm reading Google's docs on logging in managed VMs. They're rather thin on detail, and I have more questions than answers after reading them:
The docs say files in /var/log/app_engine/custom_logs are picked up automatically – does this path already exist, or do you have to mkdir -p it yourself?
Do I have to deal with log rotation/truncation myself?
How large can the files be?
If you write a file ending in .log.json and some part of it is corrupt, does that break ingestion of the whole file, or will Google still pick up the parts that can be parsed?
Is there a performance benefit or cost to logging things this way, compared to using the APIs?
UPDATE: I managed to get logs to show up in the log viewer, but only when logging to files with the .log suffix; whenever I try .log.json, the files are not picked up and I can't see any errors anywhere. The JSON output seems fine and conforms to the requirement of having one object per line. Does anyone know how to debug this?
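For what it's worth, here is a minimal sketch (Python, chosen only for illustration) of writing to that directory given the documented one-object-per-line requirement; the file name and the JSON field names are my own guesses, not a confirmed schema:

# Sketch only: append one JSON object per line to the documented pickup
# directory. The file name and field names here are illustrative.
import json
import os
import time

LOG_DIR = "/var/log/app_engine/custom_logs"

def write_entry(message, severity="INFO"):
    entry = {"message": message, "severity": severity, "timestamp": time.time()}
    os.makedirs(LOG_DIR, exist_ok=True)  # mkdir -p; harmless if it already exists
    with open(os.path.join(LOG_DIR, "myapp.log.json"), "a") as f:
        f.write(json.dumps(entry) + "\n")  # exactly one object per line

write_entry("user signed in")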

Related

Can one ever access a file stored in Heroku's ephemeral file system via the browser?

I've been writing an app to deploy on Heroku. So far everything has been working great. However, there's one last step that I haven't been able to solve: I need to generate a CSV file on-the-fly from the DB and let the user download the file.
On my local machine I can simply write the file to a folder under the web app root, e.g. /output, and then redirect my browser to http://localhost:4000/output/1.csv to get the file.
However, the same code fails again and again on Heroku no matter how I try to modify it. It either complains about not being able to write the file or not being able to redirect the browser to the correct path.
Even if I manually use heroku run bash, create an /output folder in the project root, and create a file there, when I try to point my browser there (e.g. https://myapp.herokuapp.com/output/1.csv), it simply says "Page not found".
Is it simply impossible to perform such an action on Heroku after all? I thought since we are free to create files on the ephemeral file system we should be able to access it from the browser as well, but things seem to be more complicated than that.
I'm using the Phoenix framework, and I already added
plug Plug.Static,
at: "/output", from: Path.expand("output/"), gzip: false
to my endpoint.ex. Apparently it works on localhost but not on Heroku?
I'll turn to Amazon S3 if indeed what I'm trying to do is impossible. However, I want to avoid using S3 as much as possible since this should be a really simple task and I don't want to add another set of credentials/extra complexity to manage. Or is there any other way to achieve what I'm trying to do without having to write the file to the file system first and then redirecting the user to it?
I know it doesn't strictly answer your question, but if you don't mind generating the CSV every time it is requested, you can use Controller.send_download/3 and serve an arbitrary payload as a download (in your case, the contents of the CSV).
Naturally, you could store the generated CSVs somewhere (like the database, or even ETS) and generate them "lazily".
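One note on why the original approach can't work reliably: heroku run bash starts a separate one-off dyno, so files created there never appear on the web dyno, and each dyno's filesystem is discarded on restart anyway. In Phoenix, the generate-on-request pattern is send_download(conn, {:binary, csv_contents}, filename: "1.csv"). Purely to illustrate the same idea in runnable form, here is a sketch in Python with Flask (the route and names are hypothetical):

# Sketch: generate the CSV in memory and send it as a download,
# never touching the ephemeral filesystem. Route and names are hypothetical.
import csv
import io

from flask import Flask, Response

app = Flask(__name__)

@app.route("/output/<int:report_id>.csv")
def download_csv(report_id):
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["id", "name"])
    writer.writerow([report_id, "example row"])  # stand-in for real DB rows
    return Response(buf.getvalue(), mimetype="text/csv",
                    headers={"Content-Disposition":
                             f"attachment; filename={report_id}.csv"})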

Protecting video files from access by third-party apps in the iOS sandbox

I have a requirement where my app records video files and stores them in the app's documents directory. I don't want any other app to be able to access these files. I have set file sharing enabled to NO, but I see that some apps, like iExplorer, can still show the video files saved under my app's documents directory. How can this be avoided?
I have also heard that mounting the iOS device's disk on a Unix/Linux machine can list out all the contents of the app sandbox.
So I want to know how to prevent this from happening.
I tried adding NSFileProtectionComplete as an attribute when saving the file, but this didn't solve the problem. Please help me with this.
Thanks,
I realize this is a little old but in the hopes of helping the next person who stumbles upon this:
You're probably looking for an encryption solution, combined with the standard steps for hiding your app's documents folder as you've mentioned. Encryption won't necessarily hide the files, but it will make them unreadable.
NSFileProtectionComplete only encrypts files while the device is locked. See the App Programming Guide for iOS, section "Protecting Data Using On-Disk Encryption". Also, keep in mind that when testing this, you'll have to wait 10-20 seconds after the device is locked before trying to verify that the file is inaccessible. If you want the protection to persist beyond that point, you'll have to handle the encryption yourself, something along the lines of what's described in this SO post, perhaps.

What can I do with generated error logs?

I'm currently working on a web application which generates daily error (and non error) logs.
The current system outputs a log per task to a text file, and outputs critical errors as well as "start" and "finish" type messages to an email account.
The current workflow is as follows: scour the mailbox for errors, then go and find the .txt file to look at the associated errors and track down the cause.
There are around 30 txt files split across about 5 servers.
This system was set up before me, but I'm looking for any advice on how to deal with the situation.
I have control of the script that forms the error logs, so I can do pretty much anything, but I'm at a loss as to where to start: I'd considered some kind of web-facing dashboard tool, or maybe outputting the files to RSS or something?
Are there any external or internal tools I should be using?
You could of course use SQL Server Reporting Services, or review this comparison table; some of the packages there may support SQL Server, but they may be overkill for your task.
It's not really clear what your problem is or what you want to do, but if I understand correctly, your biggest problem is that some messages are logged to a log file but others are sent by email. Therefore, there is no single location that has all error messages in it and that makes analysis and troubleshooting difficult.
The best solution would be to use a logging framework that supports multiple logging destinations (file, DB, email) and severities. That would allow you to specify a configuration like "all errors are logged to a text file and critical ones are also sent by email", so you can ensure that you have everything in one place for general analysis but critical errors are also handled with priority.
You didn't mention what programming language you use, but assuming it's .NET-based, log4net and Enterprise Library are two common frameworks, and there are many questions about them here on SO. Googling should give you a good idea of the pros and cons for your situation. If you're using a different language, you can look for the equivalent package: log4j (Java), logging (Python), etc.
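To make that configuration concrete, here is a minimal sketch using Python's standard logging module (the equivalent mentioned above); the mail host and addresses are placeholders. All errors go to one file, and critical messages are additionally emailed:

# Sketch: one logger, two destinations with different severity thresholds.
import logging
from logging.handlers import SMTPHandler

logger = logging.getLogger("myapp")
logger.setLevel(logging.DEBUG)

# Destination 1: everything at ERROR and above goes to a single file.
file_handler = logging.FileHandler("errors.log")
file_handler.setLevel(logging.ERROR)
file_handler.setFormatter(
    logging.Formatter("%(asctime)s %(levelname)s %(name)s: %(message)s"))
logger.addHandler(file_handler)

# Destination 2: CRITICAL messages are also emailed (placeholder addresses).
mail_handler = SMTPHandler(mailhost="smtp.example.com",
                           fromaddr="app@example.com",
                           toaddrs=["ops@example.com"],
                           subject="[myapp] critical error")
mail_handler.setLevel(logging.CRITICAL)
logger.addHandler(mail_handler)

logger.error("written to errors.log only")
logger.critical("written to errors.log and emailed")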

Writing log files using Java EE

I need to create application logs to capture users signing in and out, and their requests.
We're using Java EE, and we thought that creating new log files (a new .txt file for each day) would be a good approach, but I see that people discourage doing that. The question is: why not do it that way, and what is the correct approach?
Also, is there some way to get the application directory?
log4j is one of the most popular loggers for Java EE applications; others are SLF4J and Logback.
log4j has many features, one of them being the ability to create daily log files.
And to answer your question: creating daily log files does not cause any harm to your application.
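For a compact, runnable illustration of the daily-file pattern (in log4j this is what DailyRollingFileAppender does), here is the equivalent sketched with Python's standard library; the file name is a placeholder:

# Sketch: a new log file per day, keeping the last 30 days.
import logging
from logging.handlers import TimedRotatingFileHandler

logger = logging.getLogger("audit")
logger.setLevel(logging.INFO)

# Rolls over at midnight; old files get a date suffix such as
# app.log.2015-06-01, and backupCount caps how many are kept.
handler = TimedRotatingFileHandler("app.log", when="midnight", backupCount=30)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s: %(message)s"))
logger.addHandler(handler)

logger.info("user alice signed in")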
Logging to text files and rolling them daily is quite a normal approach and discouraging it per se is not justified.
For some specific uses it may be improper, for example if you log sensitive data (passwords, card numbers, etc.). There may also be issues with some cluster configurations, but then you would have to ask a more detailed question.
Log4j works fine, but once you have many different applications logging to many different log files in different locations, you run into the problem of having to search many log files to find the trace of a certain transaction.
A colleague once recommended Graylog2, which makes viewing the log files a lot easier.
You might want to take a look at that as well, depending on how many log files you're planning to keep.
http://graylog2.org/about

Is there a known good way to keep multiple servers logging for CakePHP

CakePHP stores everything under the /app/tmp/logs folder, and if you have multiple servers, to see what is happening on each one you have to check that server's logs folder.
Is there any solution I can use with CakePHP to centralize its logging in one place, with the log files being saved and rotated on a daily basis?
Cake allows you to set a parameter in the Controller::log() function.
http://book.cakephp.org/view/159/Using-the-log-function
Basically, when you have an error:
$this->log( 'some message describing the error', 'allserverslog' );
// second param can also be LOG_ERROR or LOG_DEBUG, 2 predefined constants that identify the default logging files
Some quick research shows that a clean method would be to redefine the TMP constant (by default define('TMP', APP.'tmp'.DS)) in /app/webroot/index.php to point the whole temp directory someplace else. This is not a good solution if the folder is supposed to be shared, though, since different apps may step on each other's feet with their temp files.
The only apparent way to point only the log directory someplace else seems to be to edit /cake/config/paths.php.
If your goal is only to make it easy to skim through log files of different apps quickly, you could simply put a bunch of symlinks to those logs into one directory.
Or, the other way around, you can make each /app/tmp/logs folder a symlink to some shared folder. Not sure I'd recommend that though; having different apps write to the same log may get confusing, since you may not always be sure which app a message came from.
