What is the difference between hadoop -appendToFile and hadoop -put when used for continuously updating stream data into HDFS?

As per the Hadoop source code, the following descriptions are pulled from the command classes:
appendToFile
"Appends the contents of all the given local files to the
given dst file. The dst file will be created if it does not exist."
put
"Copy files from the local file system into fs. Copying fails if the file already exists, unless the -f flag is given.
Flags:
-p : Preserves access and modification times, ownership and the mode.
-f : Overwrites the destination if it already exists.
-l : Allow DataNode to lazily persist the file to disk. Forces
replication factor of 1. This flag will result in reduced
durability. Use with care.
-d : Skip creation of temporary file(<dst>._COPYING_)."
I am trying to regularly update a file in HDFS as it is being updated dynamically by a streaming source on my local file system.
Which of appendToFile and put should I use, and why?

appendToFile modifies the existing file in HDFS, so only the new data needs to be streamed/written to the filesystem.
put rewrites the entire file, so the entire new version of the file needs to be streamed/written to the filesystem.
You should favor appendToFile if you are only appending to the file (e.g. adding log lines to the end). It will be faster for that use case because only the new data has to be transferred. If the file changes in ways other than simple appends to the end, you should use put: it is slower, but you won't lose data or corrupt your file.
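For illustration, here is a minimal Java sketch (not from the original answer) of the same trade-off through the FileSystem API that the shell commands wrap; the cluster URI, paths and payload are placeholders, and append must be supported by the cluster. The shell equivalents would be hadoop fs -appendToFile <localsrc> <dst> versus hadoop fs -put -f <localsrc> <dst>.

import java.net.URI;
import java.nio.charset.StandardCharsets;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class StreamUpdateSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder cluster URI and paths.
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020"), conf);
        Path dst = new Path("/data/stream.log");

        // appendToFile-style update: only the new bytes are sent to HDFS.
        try (FSDataOutputStream out = fs.append(dst)) {
            out.write("new records only\n".getBytes(StandardCharsets.UTF_8));
        }

        // put -f style update: the whole local file is shipped again and
        // overwrites the previous version of dst.
        fs.copyFromLocalFile(false /* delSrc */, true /* overwrite */,
                new Path("/local/stream.log"), dst);

        fs.close();
    }
}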

Related

Pervasive SQL (10.3) file size exceeding 2GB resulting in a .^01 file being created

We have a database with a data file exceeding 2GB, which resulted in a .^01 file being generated with the same file name. We now have a .DAT file and a .^01 file with the same name.
I have subsequently deleted the unnecessary data (old history, no longer required) and the .DAT file is now only 372MB, but the .^01 file remains.
I would like to clone the .DAT file, save the data and reload it into the cloned (blank) file. I normally use BUTIL (CLONE, SAVE and LOAD) but am unsure what I need to do with the .^01 file, as BUTIL -SAVE FileName.^01 FileName.seq returns an error because it does not recognise the ^:
BUTIL-14: The file that caused the error is FileName.01.
BUTIL-100: MicroKernel error = 12. The MicroKernel cannot find the specified file.
I would greatly appreciate some direction/input in this regard
Thank you and kind regards,
You don't need to do anything with the .^XX file(s). They are called Extended files and are automatically handled by the PSQL engine. A BUTIL -CLONE / -COPY will read all of the data (original file and extended file(s)) and copy it to the new file.
To rebuild it, you should do something like:
BUTIL -CLONE <NEWFILE.DAT> <OLDFILE.DAT>
BUTIL -COPY <OLDFILE.DAT> <NEWFILE.DAT>
Also, if the file grows above 2GB again, the Extended File (.^01) will come back.

Does DatabaseIntegrityCheck.sql log to text files in disk?

I want to set up CHECKDB using DatabaseIntegrityCheck.sql from Ola Hallengren. I have passed LogToTable = 'Y', but will it also log to text files on disk? I did not find any parameter for that.
P.S. I know that the jobs from MaintenanceSolution.sql do log to files on disk.
Script reference : DatabaseIntegrityCheck.sql
The procedure does not, by itself, log to disk. There isn't really any clean way to write to disk from inside T-SQL. Hence the use of an output file in the job step (which is what the job-creation section of MaintenanceSolution does).

Date in NLog file name and limit the number of log files

I'd like to achieve the following behaviour with NLog for rolling files:
1. prevent renaming or moving the file when starting a new file, and
2. limit the total number or size of old log files to avoid capacity issues over time
The first requirement can be achieved e.g. by adding a timestamp like ${shortdate} to the file name. Example:
logs\trace2017-10-27.log <-- today's log file to write
logs\trace2017-10-26.log
logs\trace2017-10-25.log
logs\trace2017-10-24.log <-- keep only the last 2 files, so delete this one
According to other posts, however, it is not possible to use a date in the file name together with archive parameters like maxArchiveFiles. If I use maxArchiveFiles, I have to keep the log file name constant:
logs\trace.log <-- today's log file to write
logs\archive\trace2017-10-26.log
logs\archive\trace2017-10-25.log
logs\archive\trace2017-10-24.log <-- keep only the last 2 files, so delete this one
But in this case, every day on the first write it moves yesterday's trace to the archive and starts a new file.
The reason I'd like to prevent moving the trace file is that we use a Splunk log monitor that watches the files in the log folder for updates, reads the new lines and feeds them to Splunk.
My concern is that if I have an event written at 23:59:59.567, the next event at 00:00:00.002 clears the previous content before the log monitor is able to read it in that fraction of a second.
To be honest, I haven't tested this scenario, as it would be complicated to set up (my team doesn't own Splunk, etc.), so please correct me if this cannot happen.
Note also that I know it is possible to feed Splunk directly in other ways, such as via a network connection, but the current Splunk setup at our company reads from log files, so it would be easier that way.
Any idea how to solve this with NLog?
When using NLog 4.4 (or older), you have to go into Halloween mode and do some trickery.
This example makes hourly log files in the same folder and ensures that archive cleanup is performed after 840 hours (35 days):
fileName="${logDirectory}/Log.${date:format=yyyy-MM-dd-HH}.log"
archiveFileName="${logDirectory}/Log.{#}.log"
archiveDateFormat="yyyy-MM-dd-HH"
archiveNumbering="Date"
archiveEvery="Year"
maxArchiveFiles="840"
archiveFileName - Using {#} allows the archive cleanup to generate a proper file wildcard.
archiveDateFormat - Must match the ${date:format=} of the fileName (so remember to update both date formats if a change is needed).
archiveNumbering=Date - Configures the archive cleanup to support parsing file names as dates.
archiveEvery=Year - Activates the archive cleanup, but also the archive file operation. Because the configured fileName already rolls to a new file on its own, we don't want any additional archive operations (e.g. to avoid generating extra empty files at midnight).
maxArchiveFiles - How many archive files to keep around.
With NLog 4.5 (still in beta), it will be a lot easier, as one just has to specify MaxArchiveFiles. See also https://github.com/NLog/NLog/pull/1993

How to modify a single file inside a very large zip without re-writing the entire zip?

I have large zip files that contain huge files. There are "metadata" text files within the zip archives that need to be modified. However, it is not possible to extract the entire zip and re-compress it. I need to locate the target text file inside the zip, edit it, and possibly append the change to the zip file. The file name of the text file is always the same, so it can be hard-coded. Is this possible? Is there a better way?
There are two approaches. First, if you're just trying to avoid recompression of the entire zip file, you can use any existing zip utility to update a single file in the archive. This will entail effectively copying the entire archive and creating a new one with the replaced entry, then deleting the old zip file. This will not recompress the data not being replaced, so it should be relatively fast. At least, about the same time required to copy the zip archive.
If you want to avoid copying the entire zip file, then you can effectively delete the entry you want to replace by changing the name within the local and central headers in the zip file (keeping the name the same length) to a name that you won't use otherwise and that indicates that the file should be ignored. E.g. replacing the first character of the name with a tilde. Then you can append a new entry with the updated text file. This requires rewriting the central directory at the end of the zip file, which is pretty small.
(A suggestion in another answer to not refer to the unwanted entry in the central directory will not necessarily work, depending on the utility being used to read the zip file. Some utilities will read the local headers for the zip file entry information, and ignore the central directory. Other utilities will do the opposite. So the local and central entry information should be kept in sync.)
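To illustrate the first approach (rewriting the container while carrying the untouched entries over), here is a minimal Java sketch using the JDK's zip file system provider (Java 13+ for this newFileSystem overload); the archive path and entry name are placeholders. On close the provider rewrites the archive, so it still costs roughly one copy of the zip, but the unchanged entries should not need to be recompressed.

import java.nio.charset.StandardCharsets;
import java.nio.file.FileSystem;
import java.nio.file.FileSystems;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Map;

public class ReplaceZipEntry {
    public static void main(String[] args) throws Exception {
        // Placeholder archive and entry names.
        Path archive = Paths.get("huge-archive.zip");
        // Mount the zip as a file system (Java 13+ overload taking a Path and an env map).
        try (FileSystem zipFs = FileSystems.newFileSystem(archive, Map.of())) {
            Path metadata = zipFs.getPath("/metadata.txt");
            // Overwrite the metadata entry; the archive is rewritten when the
            // file system is closed.
            Files.write(metadata, "updated metadata\n".getBytes(StandardCharsets.UTF_8));
        }
    }
}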
There are "metadata" text files within the zip archives that need to be modified.
However, it is not possible to extract the entire zip and re-compress it.
This is a good lesson in why, when dealing with huge datasets, keeping the metadata in the same place as the data is a bad idea.
The .zip file format isn't particularly complicated, and it is definitely possible to replace something inside it. The problem is that the new data might be larger and no longer fit into the location of the old data, so there is no standard routine or tool to accomplish that.
If you are skilled enough, you can theoretically create your own zip handling functions to provide a "file replace" routine. If it is only about the (smallish) metadata, you do not even need to compress it. The .zip's "central directory" is located at the end of the file, after the compressed data (the format was optimized for appending new files). The general concept is: read the "central directory" into memory, append the new modified file after the compressed data, update the central directory in memory with the new offset of the modified file, and write the central directory back after the modified file. (The old file would still be sitting somewhere inside the .zip, but it would no longer be referenced by the "central directory".) All the operations would happen at the end of the file, without touching the rest of the archive's content.
But practically speaking, I would recommend simply keeping the data and the metadata separate.
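As a small, hedged illustration of that concept (not a full implementation; the class name is made up), the sketch below only locates the end-of-central-directory record, which is the piece you would need to read and rewrite when appending a replacement entry:

import java.io.IOException;
import java.io.RandomAccessFile;

public class FindCentralDirectory {
    // End-of-central-directory signature: 0x06054b50 ("PK\5\6"), stored little-endian.
    private static final int EOCD_SIGNATURE = 0x06054b50;

    public static long findEocdOffset(RandomAccessFile zip) throws IOException {
        long fileLength = zip.length();
        // The EOCD record is at least 22 bytes and may be followed by a zip
        // comment of up to 65535 bytes, so scan backwards over that window.
        long scanStart = Math.max(0, fileLength - 22 - 65535);
        for (long offset = fileLength - 22; offset >= scanStart; offset--) {
            zip.seek(offset);
            // readInt() is big-endian; reverse to compare with the little-endian signature.
            if (Integer.reverseBytes(zip.readInt()) == EOCD_SIGNATURE) {
                return offset;
            }
        }
        return -1; // not a zip, or no EOCD record found
    }

    public static void main(String[] args) throws IOException {
        try (RandomAccessFile zip = new RandomAccessFile(args[0], "r")) {
            System.out.println("End of central directory record at offset: " + findEocdOffset(zip));
        }
    }
}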

Hadoop: write a file and put it in the distributed cache

I have a requirement to create a dynamic file based on the content of the Hadoop job.properties file and then put it in the distributed cache.
When I create the file, I see that it has been created under the "/tmp" path.
I create a symbolic name and refer to this file in the cache. Now, when I try to read the file from the distributed cache, I am not able to access it. I get the error: Caused by: java.io.FileNotFoundException: Requested file /tmp/myfile6425152127496245866.txt does not exist.
Can you please let me know whether I need to specify the path while creating the file and also use that path while accessing/reading the file.
I only need the file to be available while the job is running.
I don't really understand what you mean by
I only need the file to be available while the job is running
But when I use the distributed cache in practice, I use a path like this:
final String NAME_NODE = "hdfs://sandbox.hortonworks.com:8020";
job.addCacheFile(new URI(NAME_NODE + "/user/hue/users/users.dat"));
Hope this will help you.
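For completeness, here is a hedged sketch of both halves (the HDFS path, the #users.dat symlink name, the class names and the mapper types are assumptions, using the org.apache.hadoop.mapreduce API): register the file with a URI fragment so each task sees it in its working directory, then read it in setup().

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.net.URI;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;

public class CacheReadingSketch {
    // Driver side: the "#users.dat" fragment creates a symlink named users.dat
    // in every task's working directory.
    public static void addToCache(Job job) throws Exception {
        job.addCacheFile(new URI(
                "hdfs://sandbox.hortonworks.com:8020/user/hue/users/users.dat#users.dat"));
    }

    // Task side: read the localized copy through that symlink.
    public static class CacheReadingMapper extends Mapper<LongWritable, Text, Text, Text> {
        @Override
        protected void setup(Context context) throws IOException, InterruptedException {
            try (BufferedReader reader = new BufferedReader(new FileReader("users.dat"))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    // e.g. build an in-memory lookup table from the cached file
                }
            }
        }
    }
}

Note that the file has to live on HDFS (or another shared file system) before it is added to the cache; a path under the client machine's local /tmp will not be visible to the tasks, which is consistent with the FileNotFoundException in the question.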
