I've read many papers and a lot of source code about distributed systems, and they contain many occurrences of "duplica" and "replica". They seem to have the same meaning.
Is there any difference between these two words?
I'm having trouble finding any papers that refer to "duplicas", either in my paper archive or on Google Scholar, although I've found a number that say "duplicates"/"duplication". If that's what you meant, then yes, they refer to the same thing.
Firstly, I am not a programmer. I am an end user who has had some recent home audio experiences that confuse me, and I believe they are related to the random/shuffle code that the manufacturer installed.
This is an exercise in educating myself, and I thought that the programmer community would be a good place to go. I have read a number of random/shuffle threads here, and although I don't understand the specifics of the coding discussed, I think I have gained a general understanding of the concepts (and of the challenges faced in writing such code).
The issue I've encountered relates to random playback using a 32 GB USB thumb drive in my new Marantz 6007 single-disc CD player (a feature promoted by Marantz in their documentation). Other owners have also raised the issue that I describe below.
In short, it only plays 'random' songs from artists whose names begin with a number or with an A or B (i.e. the beginning of the alphabetical listing). This amounts to a total of only 312 files on my thumb drive. It seems inconceivable that Marantz would specifically write code for that small an array (or that there is a coding error), especially since its specs list the maximum permissible thumb drive content as 65,025 files.
FYI:
Skipping forward to the next song in the 'random' sequence does not resolve the issue. Removing folders and sub-folders from the thumb drive so that only song files exist on the drive does not change the results.
My experiences with this feature have been disappointing and baffling from an end-user perspective. I can't imagine the manufacturer's thought process that would so limit the random play feature.
I look forward to reading your opinions and comments in the hope of informing myself and preparing myself should I end up in direct communication with Marantz (I have not received a response to my online support submission as of this writing). It's a great product save for the issue noted above.
I'm not sure whether I've placed this question in the correct location, or whether this is even a subject you want to address in this community, given the absence of a specific coding question. I guess the moderator will decide.
Regardless, I thank you in advance for reading my overlong message, and for any insights you may provide. If you require additional information from me, I'll do my best to provide it in a timely fashion.
I am attempting to write a program that will generate .agr files that can be loaded and manipulated in xmgrace. I've dissected an example file that has the kind of formatting I'm looking for, but I'm not 100% sure what every line does. Most of the commands are self-explanatory, but is there a guide somewhere I can use to look up the more obscure lines, like #reference date 0, #default sformat "%.8g", #r0 off, etc.?
I've looked around the grace website in both the user and developer sections as well as googling individual lines without much luck. All I'm looking for is basically a man page of xmgrace .agr files. The more low-level details, the better.
Any help would be appreciated!
I'm sure that you have already looked through all of the official documentation for Grace/xmgrace. This documentation doesn't give much information about the internals of the .agr files that xmgrace creates.
I have found in the past that creating your own files and studying them in a text editor is a good way to learn what each line does, but as you said it is not always possible to decipher everything.
A project doing something similar to yours is pygrace.
Maybe if you look at the pygrace source code it will give you some further clues to fill in the gaps in your existing knowledge.
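Since the format is only documented by what Grace itself writes, one pragmatic approach is to emit just the commands you have verified by round-tripping a file through xmgrace. Here is a minimal sketch in Python; the handful of header commands is an assumption based on files Grace 5.1 saves (xmgrace supplies defaults for everything you omit):

```python
# Sketch: generate a minimal .agr project file.
# Lines starting with "@" are Grace interpreter commands; "#" lines
# are comments; "&" terminates a data set. The specific header
# commands below are assumptions taken from files Grace 5.1 writes.

def write_agr(path, points, title="Example"):
    lines = [
        "# Grace project file",
        "@version 50125",            # format version written by Grace 5.1.25
        "@g0 on",
        "@with g0",
        '@    title "%s"' % title,
        '@    xaxis label "x"',
        '@    yaxis label "y"',
        "@target G0.S0",             # direct the data to set 0 of graph 0
        "@type xy",
    ]
    lines += ["%g %g" % (x, y) for x, y in points]
    lines.append("&")                # end of data set
    with open(path, "w") as fh:
        fh.write("\n".join(lines) + "\n")

write_agr("parabola.agr", [(x, x * x) for x in range(5)])
```

Loading the generated file in xmgrace and re-saving it is also a cheap way to discover what the "official" form of each command looks like.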
Here is a straightforward question about random access in file systems that use FAT.
I have seen different explanations of FAT with different kinds of pictures/animations showing different things. I don't understand how random access is possible without going through the file once. I thought of some kind of table that listed all the blocks that belong to a certain file, but it looks like the FAT is only mapping to the next block, meaning you still have to go through the FAT until you find the End-Of-File, then save these indexes in an array, and only then would you be able to perform random access.
My question is if what I wrote above is true. Is the whole random access only possible after first looking through the table to find all the blocks?
The File Allocation Table, FAT, used by DOS is a variation of linked allocation, where all the links are stored in a separate table at the beginning of the disk. The benefit of this approach is that the FAT table can be cached in memory, greatly improving random access speeds.
So it can be cached which makes it faster.
Ref: Abraham Silberschatz, Greg Gagne, and Peter Baer Galvin, "Operating System Concepts, Ninth Edition ", Chapter 12
I think it only reduces the cost of random access compared with plain linked allocation, since it still traverses each file's chain of links, just in the cached table instead of on disk. That is why it says random access can be optimised by the FAT.
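The chain-walk described above can be sketched in a few lines. The table here is a hypothetical toy FAT, not a real on-disk layout; the point is that finding the cluster for a byte offset still means following links one by one, but each hop is an in-memory lookup rather than a disk seek:

```python
# Sketch of FAT-style chained allocation. fat[i] holds the cluster
# that follows cluster i in whatever file owns it, with a sentinel
# marking end-of-chain.
EOC = -1  # end-of-chain marker (real FAT uses reserved values like 0xFFF8+)

# Hypothetical file starting at cluster 2, occupying 2 -> 5 -> 3 -> 7.
fat = {2: 5, 5: 3, 3: 7, 7: EOC}

def cluster_for_offset(start_cluster, offset, cluster_size=512):
    """Find the cluster holding byte `offset` of a file.

    Random access still walks the chain link by link, but with the
    whole table cached in memory each hop is one array lookup.
    """
    hops = offset // cluster_size
    cluster = start_cluster
    for _ in range(hops):
        cluster = fat[cluster]
        if cluster == EOC:
            raise ValueError("offset past end of file")
    return cluster

print(cluster_for_offset(2, 0))     # first cluster: 2
print(cluster_for_offset(2, 1100))  # 1100 // 512 = 2 hops: 2 -> 5 -> 3
```

So yes: you do traverse the table, but you never touch the disk to do it, which is the optimisation the textbook refers to.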
I have several address lists in my Thunderbird address book.
Every time I need to edit an address that is contained in several lists, it is a pain in the neck to find which list contains the address to be modified.
As a helper tool, I want to read the several files and give the user, in a single search, a list of which xxx.MAB files include the searched address.
Having that list, the user can simply go and edit just the right address lists.
I would like to know a minimum about the format of the mentioned MAB files, so I can open them and search for strings inside.
thanks in advance
juan
PS: I have asked on the Mozilla forum, but there are no plans from Mozilla to consolidate the addresses into one master file, with the different lists just containing links to the master. There is one individual thinking of doing that, but he has no idea when, due to lack of resources.
On this forum there is a similar question mentioning MORK files, but my current Thunderbird appears to keep all its addresses in MAB files.
I am afraid there is no answer that will give you a proper solution for this question.
MORK is a textual database format; in Thunderbird it holds the Address Book Data (.mab files) and the Mail Folder Summaries (.msf files).
The format, written by David McCusker, is a mix of various numerical namespaces; it is undocumented and seems to be no longer developed, maintained, or supported. The only way to get to grips with it is to reverse engineer it while reading source code that uses the format.
However, experienced people have tried to write parsers for this file format without much success. According to Wikipedia, former Netscape engineer Jamie Zawinski had this to say about the format:
...the single most brain-damaged file format that I have ever seen in my nineteen year career
This page states the following:
In brief, let's count its (Mork's) sins:
- Two different numerical namespaces that overlap.
- It can't decide what kind of character-quoting syntax to use: Backslash? Hex encoding with dollar-sign?
- C++ line comments are allowed sometimes, but sometimes // is just a pair of characters in a URL.
- It goes to all this serious compression effort (two different string-interning hash tables) and then writes out Unicode strings without using UTF-8: writes out the unpacked wchar_t characters!
- Worse, it hex-encodes each wchar_t with a 3-byte encoding, meaning the file size will be 3x or 6x (depending on whether wchar_t is 2 bytes or 4 bytes.)
- It masquerades as a "textual" file format when in fact it's just another binary-blob file, except that it represents all its magic numbers in ASCII. It's not human-readable, it's not hand-editable, so the only benefit there is to the fact that it uses short lines and doesn't use binary characters is that it makes the file bigger. Oh wait, my mistake, that isn't actually a benefit at all.
The frustration shines through here, and it is obviously not a simple task.
Consequently, there apparently exist no parsers outside Mozilla products that are actually able to parse this format.
I have reverse engineered complex file formats in the past and know it can be done, given patience and the right amount of energy.
Sadly, this seems to be your only option as well. A good place to start would be to take a look at Thunderbird's source code.
I know this doesn't give you a straight-up solution but I think it is the only answer to the question considering the circumstances for this format.
And of course, you can always look into the extension API to see if that allows you to access the data you need in a more structured way than handling the file format directly.
Sample code that reads Mork:
Node.js: https://www.npmjs.com/package/mork-parser
Perl: http://metacpan.org/pod/Mozilla::Mork
Python: https://github.com/KevinGoodsell/mork-converter
More links: https://wiki.mozilla.org/Mork
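If all you need is to know which .mab file mentions a given address, rather than to fully parse Mork, a crude byte-level scan may be enough in practice: plain ASCII strings such as e-mail addresses usually appear literally in the file. A sketch (this is not a Mork parser, and the profile path in the example is hypothetical; it will miss any address stored with Mork's hex escapes):

```python
# Crude search: report which .mab files contain a given string.
# NOT a Mork parser -- it scans the raw bytes, which only works when
# the address is stored as plain ASCII (non-ASCII text is hex-escaped
# by Mork and will not match).
import glob
import os

def find_address(profile_dir, needle):
    hits = []
    for path in glob.glob(os.path.join(profile_dir, "*.mab")):
        with open(path, "rb") as fh:
            if needle.encode("ascii") in fh.read():
                hits.append(os.path.basename(path))
    return hits

# Example (hypothetical Thunderbird profile path):
print(find_address("/home/juan/.thunderbird/abc.default", "jdoe@example.com"))
```

For the stated goal (pointing the user at the right list to edit), this may be all that is needed; the real parsers linked above are the fallback if it is not.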
From what I know, it's best practice to name files like this: file_name.txt, or, if you prefer, file-name.txt.
Now some people seem to like to name their files fileName.txt or FILENAME.TXT or "File Name.txt". How do you explain to them that this is not a good idea? Why exactly is the aforementioned naming a best practice?
I only vaguely know that some file systems have trouble with uppercase, and that URIs should be lowercase to avoid confusion (although Wikipedia does have uppercase characters in its URLs, e.g. http://en.wikipedia.org/wiki/Sinusitis).
W.
Well, one problem with uppercase letters is that some filesystems (NTFS as used by Windows, for example) treat names case-insensitively, so filename.txt and FILENAME.TXT refer to the same file (the case you typed is preserved, but ignored in lookups), whereas other filesystems (ext on Linux, for example) treat these as two different files.
So, if you have some reference to a file you called file.txt, and the file is actually named File.txt, then on NTFS this is no problem, but if you copy the files to a case-sensitive filesystem such as ext, the reference will fail because that filesystem has no file named file.txt.
Because of this, it's best practice to always use lowercase letters.
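One way to check in advance whether a set of files would survive a move in the other direction (case-sensitive to case-insensitive) is to group names that differ only by case. A minimal sketch; the directory scan at the end is just one example use:

```python
# Group filenames that become identical when case is ignored; any
# group with more than one entry will collide on a case-insensitive
# filesystem (e.g. NTFS under Windows).
import os
from collections import defaultdict

def case_collisions(names):
    by_folded = defaultdict(list)
    for name in names:
        by_folded[name.casefold()].append(name)
    return [group for group in by_folded.values() if len(group) > 1]

# Check a real directory for names that would clash:
print(case_collisions(os.listdir(".")))
```

An empty result means the tree copies over cleanly, at least as far as name collisions are concerned.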
If your colleagues are clueless, then you might be able to convince them that ALL CAPS takes more storage, uses more power, and is more likely to get deleted.
However, if they are as knowledgeable about filenames as you, there's little you can do to get them to side with your preference.
In this situation, I like to take the absurdist approach to help my colleagues want a reasonable outcome. I suggest you start naming files with CrAzY cAsE. After a few directories of CrAzY cAsE, your ALL CAPS colleagues and your lowercase colleagues will come to you and ask you to stop. You then say: well, we should have a standard naming convention, and I don't mind what it is as long as we can agree on one. Then nudge the discussion toward lowercase names, and declare that the binding compromise.
Maximilian has a good point!
It's best practice to avoid the possibility of confusion (dissimilar names treated as identical), but I work in a place where various systems are used, from DOS to Windows to Unix, and I have never been able to convince those users that CAPS LOCK should be avoided.
Since I mostly deal with Unix-like systems, I would dearly love to legislate for lower-case everywhere, but I'm beating my head against a brick wall.
Best Practice is an alien concept to most computer users.
If your colleagues are programmers you might stand a chance.
The argument that all lowercase is 'best practice' could just as easily be used to vindicate all CAPS as best practice.
I think it's fair to say that the vast majority of users don't operate in multi-platform environments, or at least not in a manner that's likely to cause them to encounter the issue raised here.
The issue is really only a problem when copying from a case-sensitive environment to a case-insensitive one while having multiple case-variants of a filename within a single directory (somewhat unlikely). The file-reference argument, for me, falls down when you consider that variation in directory structure is likely to be an equal problem in such situations.
At the end of the day, in a corporate environment there should be a published standard for such things that everyone is at least encouraged to follow; that, for me, is best practice. Those who don't follow the standard have only themselves to blame.
The POSIX standard (IEEE Std 1003.1) defines a character set for portable filenames (it does, however, indicate that case should be preserved). At least it removes spaces and other "special" characters from the set.
The set is, from memory: [A-Za-z0-9._-]
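A quick sketch of checking names against that portable set; the extra rule that a name should not begin with a hyphen is also part of POSIX, since such names look like command-line options:

```python
# Check whether a filename uses only the POSIX portable filename
# character set (A-Z, a-z, 0-9, dot, underscore, hyphen) and does
# not start with a hyphen.
import re

PORTABLE = re.compile(r"^[A-Za-z0-9._][A-Za-z0-9._-]*$")

def is_portable(name):
    return bool(PORTABLE.match(name))

print(is_portable("file_name.txt"))   # True
print(is_portable("File Name.txt"))   # False: contains a space
print(is_portable("-rf"))             # False: leading hyphen
```

Note that FILE_NAME.TXT also passes this check: POSIX portability says nothing against uppercase, which is why the lowercase convention is a style choice layered on top.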