Once I import a file and then corrupt it (with Anki synchronisation turned off), the deck is still fully functional. So where is a copy of the imported file stored? In which folder on macOS or Windows?
As explained in the relevant section of the manual, all of Anki's files are stored in the following places:
Windows: %APPDATA%\Anki2
macOS: ~/Library/Application Support/Anki2
However, there are some gotchas. If you need more details, see the link.
Besides that, beware that decks are not stored as individual files: the whole Anki collection is a single database, so you can't copy decks around. However, you can still export decks from within Anki.
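Inside that Anki2 folder, each profile gets its own subfolder, and the whole collection lives in one database file. A rough sketch of the layout (the profile name "User 1" is just the default):

Anki2/
    User 1/
        collection.anki2    <- the single database holding all decks, notes and cards
        collection.media/   <- media files referenced by your notes
        backups/            <- automatic backups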
I am having trouble with D's package.d feature. I have my package.d file:
module dew;
public import dew.util;
I then have util.d:
module dew.util;
struct Size
{
    int width;
    int height;
}
When I try to use it in another project, the compiler gives me an error (error screenshot omitted).
I know that this should work, because projects on GitHub use it, specifically bindbc-sdl.
Your code is correct. Judging by the error message, I'm assuming your DUB project layout looks something like this:
dew/
    dub.sdl/dub.json
    source/
        package.d
        util.d
However, this layout is incorrect. By default, DUB only passes import folders to the compiler rather than listing every input source file, so the compiler attempts to discover the files from the import paths on the filesystem. The compiler dumped the import paths it searches in your error screenshot.
The rough compiler equivalent of what it is doing right now is (see dub -v):
dmd source/app.d -Isource -I../dew/source
Whatever path is written there is invisible to the compiler: the dew/source/ part is just an opaque string the compiler doesn't interpret. All the compiler sees is a package.d and a util.d somewhere in its search paths. For imports from folders to work, the files need to correspond to the filesystem layout, i.e. you need a folder named dew where your files are stored.
So a module named dew.util would correspond to dew/util.d
And your package dew would correspond to dew/package.d
For your DUB project, that essentially means you need to move all your source files into:
dew/
    dub.sdl/dub.json
    source/
        dew/
            package.d
            util.d
Alternatively, it would be possible to specify all files manually, one by one; the compiler can then look them up through the module declarations at the top of each file. However, you lose the convenience of module names mapping to filesystem paths, which other community-made D source tools and IDEs might expect. On the command line that would be equivalent to:
dmd source/app.d ../dew/source/dew/package.d ../dew/source/dew/util.d
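With the fixed layout, a consuming project can then import the package as usual. A minimal sketch (this app.d is illustrative and not from the question):

// app.d in the consuming project
import std.stdio;
import dew; // resolves to dew/package.d, which publicly imports dew.util

void main()
{
    auto s = Size(800, 600); // Size comes from dew.util via the public import
    writeln("width = ", s.width, ", height = ", s.height);
}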
I want to know how to do something like the following...
I have a directory, let's call it "D:\Folder\", and it has some file types like .json, .lua, etc. I need to be able to put the appropriate files in a table based on their file type. How do I do this in Lua without external libraries? Also, how can I get other information on the files, like size and date modified, and store that info?
As Yu Hao said in the comment, Lua by itself doesn't have any methods to get the list of files in a folder or to access the attributes of those files. In terms of external libraries, you can use the LuaFileSystem module, which has everything you need, or winapi if you are looking for a Windows-specific solution. Both are small libraries that can be compiled quite easily using MinGW.
If you are looking for a Windows-only, no-external-library solution, you should be able to run the "dir" command and process its results using io.popen. You can parse the captured output to get file names, sizes, and dates. You can also get a file's size using file:seek, but since you may be parsing the output anyway, you can get it all from there. I don't think there is anything much simpler than that.
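A minimal sketch of that approach (the folder path and the use of dir's /b bare-listing switch are my assumptions):

-- group files in D:\Folder into a table keyed by extension
local folder = "D:\\Folder"
local byType = {} -- e.g. byType["lua"] = { "init.lua", ... }

local p = io.popen('dir /b "' .. folder .. '"') -- one bare file name per line
for name in p:lines() do
    local ext = name:match("%.([^.]+)$") -- text after the last dot
    if ext then
        ext = ext:lower()
        byType[ext] = byType[ext] or {}
        table.insert(byType[ext], name)
    end
end
p:close()

-- file size via file:seek, as mentioned above
local f = io.open(folder .. "\\example.json", "rb")
if f then
    print("size in bytes:", f:seek("end"))
    f:close()
end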
How about searching for a pattern that represents any and all characters a file name could possess, followed by .file_type, and then running that through io.open, for example... possible?
You won't be able to "guess" filenames by enumerating possible symbol combinations simply because this .... will .... take .... a .... very .... long .... time.
I am programming with C using Code::Blocks. My project is divided in 3, header, implementation and main.
Whenever I worked on a project, apart from the source files and the bin and obj folders, I had a .depend and a .layout file. All good.
Now I created a new project and just copy-pasted everything into new source files. I did this twice.
In each case, I have a .c.save file with the same name as the implementation file (i.e. if the implementation file is called imp.c, the extra file is called imp.c.save). I asked a friend of mine what it might be, and he said I need to beware, as he once had two random files created which prevented him from building correctly (he got a stupid error). When the files were deleted, everything went back to normal.
I did a short test of the program and can find nothing different. I am hesitant to delete it, since it has cropped up in both cases, but I don't want to compromise my code.
I tried googling but didn't find much. Any help?
Well, it didn't cause any problems, so I assume it is an autosave file.
I have a C program built using Autotools. In src/Makefile.am, I define a macro with the path to installed data files:
AM_CPPFLAGS = -DAM_INSTALLDIR='"$(pkgdatadir)"'
The problem is that I need to run make install before I can test the binary (since it needs to be able to find the data files).
I can define another macro with the path of the source tree so the data files can be located without installing:
AM_CPPFLAGS = -DAM_INSTALLDIR='"$(pkgdatadir)"' -DAM_TOPDIR='"$(abs_top_srcdir)"'
Now, I would like the following behavior:
If the binary was installed via make install, use AM_INSTALLDIR to fetch data files.
If the binary was not installed, use AM_TOPDIR to fetch data files.
Is this possible? Is there a better approach to this problem?
What I do (in https://rhdunn.github.com/cainteoir/) is:
/* getenv is declared in <stdlib.h>; DATADIR and PACKAGE come from the build system */
const char *basedir = getenv("CAINTEOIR_DATADIR");
if (!basedir)
    basedir = DATADIR "/" PACKAGE; // e.g. /usr/share/cainteoir-engine
and then run it (in tests/harness.py) as:
CAINTEOIR_DATADIR=`pwd`/data src/apps/metadata/metadata test_file.epub
This also allows the user to change where the data is loaded from if they wish.
Making the program able to use a run-time configuration, as proposed by reece, is a good solution. If for some reason you do not want it to be configurable at run time, a common alternative is to build a test binary differently from the installed binary. (This has its own problems, in particular ensuring that the program you are testing behaves consistently with the program that is installed.) An easy way to do that is something like:
bin_PROGRAMS = foo
check_PROGRAMS = test-foo
test_foo_SOURCES = $(foo_SOURCES)
AM_CPPFLAGS = -DINSTALLDIR='"$(pkgdatadir)"'
test_foo_CPPFLAGS = -DINSTALLDIR='"$(abs_top_srcdir)"'
Rather than using a binary with a different name, you might want to have a dedicated tests directory and build the program using the same name as the original.
Note that I've changed the name from AM_INSTALLDIR to INSTALLDIR. Automake reserves names beginning with "AM_" for its own use, and by using that name you are stomping on Automake's namespace.
A bit of additional information first: The data files are under active development, and I have various scripts that need to call binaries using local data files, whereas installed binaries should use stable, installed data files.
My original solution made use of an environment variable, as proposed by reece. But I didn't want to manage setting up environment variables in various places, and I didn't want any risk of the wrong data files being picked up due to a mistake.
So the solution I ended up with was to define macros for both locations at build time, and add a flag (-local) to the binaries to force local data files to be used.
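A minimal sketch of that approach, assuming hypothetical macro names INSTALLDIR and TOPDIR supplied through AM_CPPFLAGS (the fallback defines are only there so the sketch compiles standalone):

#include <stdio.h>
#include <string.h>

/* Normally provided by the build system, e.g.
 *   AM_CPPFLAGS = -DINSTALLDIR='"$(pkgdatadir)"' -DTOPDIR='"$(abs_top_srcdir)"'
 */
#ifndef INSTALLDIR
#define INSTALLDIR "/usr/local/share/example"
#endif
#ifndef TOPDIR
#define TOPDIR "/home/user/src/example"
#endif

int main(int argc, char **argv)
{
    const char *datadir = INSTALLDIR; /* default: stable, installed data files */

    /* the -local flag forces the in-tree data files during development */
    for (int i = 1; i < argc; i++) {
        if (strcmp(argv[i], "-local") == 0)
            datadir = TOPDIR "/data"; /* the "/data" subdirectory is illustrative */
    }

    printf("loading data files from %s\n", datadir);
    return 0;
}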
I am trying to understand the different file extensions for the PFXplus PowerFlex database. Could someone please briefly tell me what each file is for?
.k1
.k2
.k3
...
.k13
.k14
.k15
.fd
.def
.hdr
.prc
.pc3
Data files:
OK, so .dat is the data file.
.k1 -> .k15 are index files.
These are the critical data files at runtime (combined with filelist.cfg, pffiles.tab, or similar, which define what files are available overall).
.fd is the file definition, needed for compiling programs.
.tag (which you did not mention) is needed only if you need to access field names at run time (such as when using a generic report tool).
.def is the file definition in human-readable form; it is not needed by any process, but is produced so a programmer or user can understand the file structure.
Run time:
The .ptc files are the compiled threads interpreted by the PowerFlex runtime.
The .prc file is a resource file used at runtime in conjunction with the .ptc file: it defines how a character-based program is to look in a GUI environment in "g-mode". It was the cheap way to upgrade character-based programs when Windows first started seeing popular use.
.hdr and .pc3 escape me at the moment, but are vaguely familiar. .hdr is probably another data file used with compression or special field types in later versions of PFXplus; .pc3 may in fact be the .ptc files...