Is SD card storage essential for the "database.db" file in my BlackBerry application? - database

I am working on a database-dependent BlackBerry application. On a button click I show some useful data on another screen, fetching it from a .db file stored on my SD card. Initially I ship that ".db" file in my application's assets.
Now I have seen some user reviews, and some users are having problems using the SD card.
My question is: is it possible to use an SQL database (.db file) in my BlackBerry application without using the SD card?
Please let me know if it is possible.

There are two separate filesystems supported: the internal device filesystem and the SD card filesystem.
The internal device filesystem does not depend on the SD card, and it is possible to create files there. But note that if your database consumes all available internal memory, the device goes mad.
Internal memory is an important resource for vital operating system activities, and when there is a shortage of it, weird things happen: sudden restarts, freezes, etc.

It's a little more complicated than just having write access to the filesystem: only certain types of internal storage support SQLite. See BlackBerry SqLite db creation: “filesystem not ready”
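A minimal sketch of the internal-storage variant using the BlackBerry Database API (the path below is illustrative; on devices whose internal storage does not support SQLite, the open fails as described in the linked question):

import net.rim.device.api.database.Database;
import net.rim.device.api.database.DatabaseFactory;
import net.rim.device.api.io.URI;

// Inside your application code: open or create the database on internal
// storage ("file:///store/...") instead of the SD card ("file:///SDCard/...").
try {
    URI uri = URI.create("file:///store/Databases/myapp/database.db");
    Database db = DatabaseFactory.openOrCreate(uri);
    // ... run your queries here ...
    db.close();
} catch (Exception e) {
    // On unsupported internal storage this throws, e.g. "filesystem not ready".
    System.out.println(e.toString());
}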

Related

How are files on network mapped drives handled locally by a Windows host?

This is by no means a "give me the solution" question, but more to gain a higher understanding. Please feel free to point me to references where I can learn more about this; I've tried searching, and all I get are how-tos for setting up and accessing network drives.
I want to be able to monitor a file on a Windows machine, but the file sits on a shared drive hosted locally. If it is manipulated by another machine, is there a process I can look for that will indicate that the file may be accessed by a resource elsewhere on the network? I understand that the host machine must be available in order to access the file in the first place, but what processes are called to actually manipulate the file? Is this below the OS level? I have access to a minifilter driver, and I can ask a more experienced developer on the team for help with it if need be.

uboot using FIT to upgrade filesystem

I want to upgrade my systems in the field using U-Boot FIT images.
My system is custom firmware booted by U-Boot. So far the FIT mechanism works very well: it provides a shasum-verified upload, and I am using U-Boot scripts to update things on the target.
One intriguing type defined in the U-Boot docs is "filesystem". The actual content could be several things: maybe a tar'ed bunch of files, or an actual collection of separate individual files stored as one chunk in the FIT.
In another FIT question, Tom Rini implied that a filesystem image is really just a binary blob (what goes into it is my problem), and that U-Boot could then just "mmc write ..." or "usb write ..." to create the new filesystem on some partition. Is this really the case?
How can I build a filesystem (say FAT) on a host build computer for packaging with FIT?
Thanks, Steve
The creation of a filesystem image depends on the filesystem itself. In many cases, build systems such as OpenEmbedded or Buildroot can help you here, as they will create the images for you.
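If you want to do it by hand, a FAT image can be assembled on a Linux host without mounting anything (a sketch; the sizes, file names and the .its file are illustrative, and mcopy comes from the mtools package):

# create and format an empty 16 MiB image
dd if=/dev/zero of=fs.img bs=1M count=16
mkfs.vfat fs.img
# copy files into the image's root directory ("::/") without mounting it
mcopy -i fs.img zImage devicetree.dtb ::/
# reference fs.img as a type "filesystem" image in update.its, then build the FIT
mkimage -f update.its update.itb

On the target, U-Boot can then write the blob to a partition with mmc write or usb write, exactly as implied above.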

Adding Capability to NFS Server - Compressing/Decompressing Stored/Retrieved Files

I need to build a custom Suse Linux NFS Server that does compression on certain files that are stored on the disk, and decompresses files as they are read from the disk. This needs to be transparent to the remote users of the file system, meaning that if a user saves a 10MB file named XYZZY.tif on /archiveDirectoryOnNFSServer, that when they do a ls -l on that mounted directory, they will see a 10MB file called XYZZY.tif, even though the actual file stored on the disk on the NFS server will be XYZZY.tif.compressed, and it will be 2MB in size.
I'm expecting that I need to build this as a driver that sits below the NFS Server software stack, but, I'm having difficulty finding where to start. Are there existing NFS Servers that provide this level of customization through APIs? Will I need to modify source of an open source NFS Server, and, if so, is there one that would be easiest to start with, and are they modularly structured such that this will be straight forward? I'm having difficult locating relevant content on the internet, and any pointers will be greatly appreciated.
IMO that kind of functionality is absolutely not the NFS server's responsibility (an NFS server should, well, serve files over NFS), but the underlying filesystem's. However, there's not that much choice in Linux-land; you could start by checking out fusecompress and btrfs.
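For btrfs, transparent compression is a mount option (a sketch; the device and mount point are illustrative):

# compress everything written to this mount from now on
mount -o compress=zlib /dev/sdb1 /export/archive
# or mark individual files/directories for compression via the 'c' attribute
chattr +c /export/archive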
This post is a bit old, so you may already be aware of some options here, but there are a couple of others (both server-side).
http://zfsonlinux.org/
The ZFS filesystem has built-in compression. I typically use lzjb, as it is the fastest compression algorithm and does a reasonable job (MySQL databases get 2-4x compression; filesystems of otherwise uncompressed data get around 4x). You have a choice of algorithm depending on how much CPU time you wish to spend on compression.
If you want different file types compressed differently, you might consider layering Gluster on top of a set of ZFS filesystems.
Gluster will allow you to store certain file types (by extension) on different underlying filesystems.
In that case, you specify the underlying filesystem as a ZFS dataset with the particular options you need: for example, .zip and .png files go on an uncompressed filesystem, while things you write once and read many times, like static HTML files, might go on a higher compression setting. You pay once when a file is written, but reads should be really fast, since fewer disk blocks are scanned and decompression is very fast.
ZFS will manage the NFS mounts if you use it as your NFS server; you won't want this if you layer Gluster on top.
It is easy to specify other attributes dynamically per filesystem: atime/noatime, the number of copies if you want redundancy beyond your normal RAID, and you can add SSDs as cache devices to get more performance. A sketch of the relevant commands follows below.
In these solutions you still send the full uncompressed files over the wire, so they don't make up for network performance, but they give you a lot of options if you're trying to speed up disk I/O or get more utilization out of your drives.
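The ZFS side of the above, sketched as commands (the pool and dataset names are illustrative):

# create a dataset with lzjb compression
zfs create -o compression=lzjb tank/archive
# per-dataset attributes: no atime updates, two copies of every block
zfs set atime=off tank/archive
zfs set copies=2 tank/archive
# let ZFS export the dataset over NFS itself
zfs set sharenfs=on tank/archive
# check how well your data actually compresses
zfs get compressratio tank/archive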

Windows display driver hooking, 64 bit

I once wrote a sort of driver for Windows which had to intercept the interaction of the native display driver with the OS. The native display driver consists of a miniport driver and a DLL loaded by win32k.sys into the session space. My goal was to meddle between win32k.sys and that DLL. Moreover, since the system may have several display drivers, I had to hook them all.
I created a standard WDM driver, configured to load at system boot (i.e. before win32k). During its initialization it hooked ZwSetSystemInformation by patching the SSDT. This function is called by the OS whenever it loads/unloads a DLL into the session space, which is exactly what I need.
When ZwSetSystemInformation is invoked with the SystemLoadImage parameter, one of its arguments is a pointer to a SYSTEM_LOAD_IMAGE structure, whose ModuleBase is the module's base mapping address. I then analyze the mapped image, patch its entry point with my function, and the rest is straightforward.
Now I need to port this driver to a 64-bit Windows. Needless to say it's not a trivial task at all. So far I found the following obstacles:
All drivers must be signed
PatchGuard
SSDT is not directly exported.
If I understand correctly, PatchGuard and driver signing verification can be turned off; the driver will be installed on a dedicated machine, so we may torture it any way we want.
According to online sources, there are tricks to locate the SSDT as well.
However, I recently discovered that there is a function called PsSetLoadImageNotifyRoutine. It may simplify the task considerably and help avoid dirty tricks (a registration sketch follows the questions below).
My questions are:
If I use PsSetLoadImageNotifyRoutine, will I receive notifications about DLLs loaded into the session space? The official documentation talks about "system space or user space", but does "system space" also include the session space?
Do I need to disable PatchGuard if I'm going to patch the mapped DLL image after it has been mapped?
Are there any more potential problems I didn't think about?
Are there any other ways to achieve what I want?
Thanks in advance.
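For reference, the registration itself is a one-liner; here is a minimal WDK sketch in C (it demonstrates only the documented mechanism and does not by itself answer the session-space question):

#include <ntddk.h>

// Callback invoked whenever an image is mapped. ProcessId is 0 for images
// mapped into system space; FullImageName may be NULL.
static VOID LoadImageNotify(PUNICODE_STRING FullImageName,
                            HANDLE ProcessId,
                            PIMAGE_INFO ImageInfo)
{
    if (FullImageName != NULL && ImageInfo->SystemModeImage) {
        DbgPrint("System image %wZ mapped at %p\n",
                 FullImageName, ImageInfo->ImageBase);
    }
}

NTSTATUS DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath)
{
    UNREFERENCED_PARAMETER(DriverObject);
    UNREFERENCED_PARAMETER(RegistryPath);
    // A real driver would also call PsRemoveLoadImageNotifyRoutine on unload.
    return PsSetLoadImageNotifyRoutine(LoadImageNotify);
}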
Do I need to disable PatchGuard if I'm going to patch the mapped DLL image after it has been mapped?
To load any driver on x64 it must be signed. With admin rights you can disable driver signature enforcement; I personally recommend using DSEO (Driver Signature Enforcement Overrider), a GUI application made for this. Or you can bypass PatchGuard by overwriting the MBR (or BIOS), although that is typically considered a bootkit, i.e. malware.
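On a dedicated test machine, the usual switches are set with bcdedit and take effect after a reboot (the PatchGuard/debugger interaction is my understanding, not documented behavior):

rem allow test-signed drivers to load
bcdedit /set testsigning on
rem enable kernel debugging; with a debugger attached, PatchGuard does not engage
bcdedit /debug on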

Read data from damaged media

Is it possible to read damaged media (CD, HDD, DVD, ...) even if Windows Explorer bombs out?
What I mean to ask is whether there is a set of APIs or something that can access the disk at a very low level (below Explorer?) and read whatever can be retrieved, even if it is only partial; especially if you can still see from Explorer that the file is there, but can't do anything with it because it is damaged somehow (a scratch on a CD, etc.).
The main problem with Windows Explorer is that it doesn't support resuming copying after a read error. Most superficially scratched CDs, for example, will fail on different areas of the disk every time you eject and reinsert them.
Therefore, with a utility that supports resuming copy operations, it is possible to read the entire contents of a damaged CD by doing "eject/reload/resume" a few times.
In fact, this is what a utility I wrote does, and I've never needed anything fancier to read scratched disks. (It simply uses ReadFile and WriteFile.)
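A stripped-down sketch of that idea (not the actual utility; the paths and chunk size are illustrative, and a real tool would persist a map of the unreadable holes between runs):

#include <windows.h>
#include <stdio.h>

// Copy in fixed chunks, skipping chunks that fail to read; rerun after
// ejecting/reloading the disc until no holes remain.
int main(void)
{
    HANDLE src = CreateFileA("D:\\file.dat", GENERIC_READ, FILE_SHARE_READ,
                             NULL, OPEN_EXISTING, 0, NULL);
    HANDLE dst = CreateFileA("C:\\rescued\\file.dat",
                             GENERIC_READ | GENERIC_WRITE, 0, NULL,
                             OPEN_ALWAYS, 0, NULL);
    if (src == INVALID_HANDLE_VALUE || dst == INVALID_HANDLE_VALUE)
        return 1;

    LARGE_INTEGER size;
    GetFileSizeEx(src, &size);

    static BYTE buf[64 * 1024];
    for (LONGLONG off = 0; off < size.QuadPart; off += sizeof buf) {
        LARGE_INTEGER pos;
        pos.QuadPart = off;
        DWORD got = 0, put = 0;
        // reposition explicitly on every pass; after a failed read the
        // file pointer is not reliable
        SetFilePointerEx(src, pos, NULL, FILE_BEGIN);
        if (ReadFile(src, buf, sizeof buf, &got, NULL) && got > 0) {
            SetFilePointerEx(dst, pos, NULL, FILE_BEGIN);
            WriteFile(dst, buf, got, &put, NULL);
        } else {
            fprintf(stderr, "unreadable chunk at %lld, skipped\n", off);
        }
    }
    CloseHandle(src);
    CloseHandle(dst);
    return 0;
}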
One step lower would be opening the raw partition (i.e. the disk as one big image) by passing a string such as "\\.\F:" to CreateFile. That would allow you to read raw sectors from a drive, but reconstructing files from that data would be hard.
In fact, the "\\.\" syntax allows you to open devices in the "\GLOBAL??" branch of the Windows Object Manager namespace as if they were files. It's not unlike calling dd with /dev/x as a parameter. There is also a "\Device" branch, but that's only accessible via DeviceIoControl() (i.e. ioctl()), meaning there's no simple ReadFile()/WriteFile() interface.
Anything lower level than that would be device-specific, I guess; like reading raw CD-ROM data (including ECC bits) the way some CD-burning programs do. You'd have to do some research on the specific media (CD, flash, DVD) and what your hardware allows you to do on them.
Note: in C/C++ source the backslashes must themselves be escaped, so the string literal you pass to CreateFile is "\\\\.\\F:" (backslash backslash dot backslash DeviceName).
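A hedged sketch of the raw approach (the drive letter and sector size are illustrative; reads on a raw volume must be sector-aligned and a multiple of the sector size):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    // Open the volume itself, not a file on it.
    HANDLE h = CreateFileA("\\\\.\\F:", GENERIC_READ,
                           FILE_SHARE_READ | FILE_SHARE_WRITE,
                           NULL, OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "CreateFile failed: %lu\n", GetLastError());
        return 1;
    }

    static BYTE sector[2048];   // CD-ROM sector size; 512 for most disks
    DWORD got = 0;
    if (ReadFile(h, sector, sizeof sector, &got, NULL))
        printf("read %lu bytes from the start of the volume\n", got);
    else
        // a damaged area typically fails here; a recovery tool would
        // note the hole, seek past it, and continue
        fprintf(stderr, "ReadFile failed: %lu\n", GetLastError());

    CloseHandle(h);
    return 0;
}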
If you want to do it, do it from the Linux side; see http://sourceforge.net/projects/monkeycity/ (open source), or a ready-made freeware app: http://www.theabsolute.net/sware/dskinv.html
The first step is dd_rescue. After that, you're free to try anything to reconstruct the data.
And there's GNU ddrescue
GNU ddrescue is a data recovery tool. It copies data from one file or block device (hard disc, cdrom, etc) to another, trying to rescue the good parts first in case of read errors.
Make sure to use the 3-arg version (manual):
ddrescue [options] infile outfile [mapfile]
That is, do use a mapfile even if it's optional, because:
If you use the mapfile feature of ddrescue, the data is rescued very efficiently, (only the needed blocks are read). Also you can interrupt the rescue at any time and resume it later at the same point. The mapfile is an essential part of ddrescue's effectiveness. Use it unless you know what you are doing.
And it's also included in Cygwin and Homebrew.
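A typical two-pass invocation looks like this (the device and file names are illustrative):

# first pass: grab everything that reads cleanly, recording holes in the mapfile
ddrescue -n /dev/sr0 cd.img cd.map
# second pass: retry the bad areas up to three times with direct disc access
ddrescue -d -r3 /dev/sr0 cd.img cd.map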
I don't know what layer exists between Windows Explorer and the Win32 APIs. You can try to write a program with the Win32 File I/O stuff. If that doesn't work, then you have to write your own device driver to get any lower.
I've had some luck from the Linux side, or using BartPE (http://www.nu2.nu/pebuilder/), but just seeing the file doesn't always mean the file is going to be recoverable, whether you're trying from Windows or Linux. Your best bet might be to use a trial of a recovery program.
I have had two disks start to disintegrate on me. From the pattern of unreadable sectors, I think they had internal flaking of their magnetic coating. Windows XP Explorer just threw up its hands and said the drive didn't even exist.
In both cases I used "GetDataBack for NTFS" from Runtime Software (http://www.runtime.org/). You can download a free trial which will show you what you could get back if you paid for it. When I bought it it was $49, but I see it is now $79.
This program is amazing. It's not necessarily fast as it will reread some sectors over and over, trying to get a consensus value from multiple tries, but when it's done you can get back stuff that you thought was gone forever. I had one drive that it took over 10 hours to analyze, but when it was done I got back over 97% of a 500GB drive. Definitely worth the price.
Another great tool is Beyond Compare. I have rev 2.5.3, but it is currently at 3.?? and costs $30. They have a full-functionality, 30-day trial. It does a great job of copying large quantities of files (and only those that need to be copied) and, unlike Explorer, it doesn't blow up if something fails. It's sort of like a visual rsync for Windows, if you're familiar with that program from the Samba people.
I have no connection with either of the companies mentioned, other than being a very satisfied customer.
The gold standard for recovering data from a magnetic storage device would have to be SpinRite. It's a commercial app though, so you probably wouldn't learn much from it.
If you have a Linux machine around, I can recommend dvdisaster. It was originally meant for creating error-correction files, but it also reads DVDs into an image and ignores read errors; and you can use different drives one after another to get missing sectors filled in the image.
