I'm looking to store images for an application in an MSSQL database. (I understand that there is some debate about whether this or file system storage is better; that's another thread though.) I'm looking at doing something similar to http://forum.codecall.net/topic/40286-tutorial-storing-images-in-mysql-with-php/ but in CodeIgniter, something along the lines of:
foreach ($_FILES as $upload_name => $info) {
    if ($info['name']) {
        // Temporary file name stored on the server
        $tmpName = $info['tmp_name'];

        // Read the file
        $fp = fopen($tmpName, 'r');
        $data = fread($fp, filesize($tmpName));
        fclose($fp);

        // model code consolidated here for ease of question-asking
        $db = $this->load->database('', TRUE); // second argument returns the DB object
        $db->insert('my_table', array('image' => $data));
    }
}
My question is mostly about security. Basically, are there any particular concerns I should have about sanitizing binary image data for inserts versus other sorts of string data? I took out the addslashes() in the code from the site linked above because I know CI's Active Record does some sanitization on its own, but I don't know whether it is better to keep it (or do some other prep work altogether).
If I understand your question correctly, you should not have to worry about it as long as you store the file type (the file's MIME type) alongside the binary data and force that MIME type whenever you serve the data. Then, whenever you handle the data, make sure it goes out with the proper MIME type, so even if someone uploads a script or a virus it is only ever rendered as an image instead of being handed to your server or the browser to execute.

Other than this, I do not think you will need to pull the upload into memory and try to scrub it.
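As a rough illustration of that idea in CodeIgniter terms (the mime_type column and the image() action are made up for this sketch, and the type reported in $_FILES is client-supplied, so detect it server-side if you want to be strict):

// Storing: keep the MIME type next to the binary data
$this->db->insert('my_table', array(
    'image'     => $data,
    'mime_type' => $info['type'], // e.g. "image/png"
));

// Serving (controller action): always emit the stored MIME type so the
// browser renders the bytes as an image rather than interpreting them
public function image($id)
{
    $row = $this->db->get_where('my_table', array('id' => $id))->row();
    header('Content-Type: ' . $row->mime_type);
    echo $row->image;
}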
I am aiming to take a file a user attaches through a Lightning Component and create a document object containing the data.
So far I have overcome the request size limits by chunking the data being uploaded into 1MB chunks. When the Apex Aura method receives these chunks of data it will either create a new document (if it is the first chunk), or will retrieve the existing document and add the new chunk to the end.
Data is received Base64 encoded, and then decoded server-side.
As the document data is stored as a Blob, the original file contents will be read as a String, and then appended with the chunk received. The new contents are then converted back into a Blob to be stored within the ContentVersion object.
The problem I'm having is that strings in Apex have a maximum length of 6,000,000 or so. Whenever the file size exceeds 6MB, this limit is hit during the concatenation, and will cause the file upload to halt.
I have attempted to avoid this limit by converting the Blob to a String only when necessary for the concatenation (as suggested here https://developer.salesforce.com/forums/?id=906F00000008w9hIAA), but this hasn't worked. I'm guessing it was patched, because it's still technically allocating a string larger than the limit.
Code's really simple when appending so far:
ContentVersion originalDocument = [SELECT Id, VersionData FROM ContentVersion WHERE Id =: <existing_file_id> LIMIT 1];
Blob originalData = originalDocument.VersionData;
Blob appendedData = EncodingUtil.base64Decode(<base_64_data_input>);
Blob newData = Blob.valueOf(originalData.toString() + appendedData.toString());
originalDocument.VersionData = newData;
You will have a hard time with it.

You could try offloading the concatenation to an asynchronous process (@future/Queueable/Schedulable/Batchable); those run with 12 MB of heap instead of 6. Could buy you some time.
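A rough sketch of the Queueable variant (the class and variable names are made up; persisting the result is left the same as in your synchronous code):

public class AppendChunkJob implements Queueable {
    private final Id contentVersionId; // id of the document created from the first chunk
    private final String base64Chunk;  // the next chunk, still Base64 encoded

    public AppendChunkJob(Id contentVersionId, String base64Chunk) {
        this.contentVersionId = contentVersionId;
        this.base64Chunk = base64Chunk;
    }

    public void execute(QueueableContext ctx) {
        ContentVersion doc = [SELECT Id, VersionData FROM ContentVersion
                              WHERE Id = :contentVersionId LIMIT 1];
        Blob chunk = EncodingUtil.base64Decode(base64Chunk);
        // Same concatenation as before, just with the larger async heap to play with
        Blob newData = Blob.valueOf(doc.VersionData.toString() + chunk.toString());
        // ...persist newData the same way your synchronous code does...
    }
}

// From the Aura method: System.enqueueJob(new AppendChunkJob(existingFileId, chunk));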
You could try cheating by embedding an iframe (Visualforce or a lightning:container tag? Or maybe a "canvas app") that would grab your file and do some manual JavaScript magic calling the normal REST API for document upload: https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/dome_sobject_insert_update_blob.htm (the last code snippet is about multiple documents). Maybe jsforce?

Can you upload it somewhere else (SharePoint? Heroku?) and have that system call into SF to push the files (no Apex = no heap size limit)? Or even look up "Files Connect".

Can you send an email with attachments? Crude, but if you write a custom Email-to-Case handler class you'll have 36 MB of RAM.
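A sketch of what such a handler could look like (names are made up; attachments arrive as Blobs, so there is no Base64/String juggling):

global class FileIngestHandler implements Messaging.InboundEmailHandler {
    global Messaging.InboundEmailResult handleInboundEmail(
            Messaging.InboundEmail email, Messaging.InboundEnvelope envelope) {
        Messaging.InboundEmailResult result = new Messaging.InboundEmailResult();
        List<ContentVersion> versions = new List<ContentVersion>();
        if (email.binaryAttachments != null) {
            for (Messaging.InboundEmail.BinaryAttachment att : email.binaryAttachments) {
                versions.add(new ContentVersion(
                    Title = att.fileName,
                    PathOnClient = att.fileName,
                    VersionData = att.body
                ));
            }
        }
        insert versions;
        result.success = true;
        return result;
    }
}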
You wrote "we needed multiple files to be uploaded and the multi-file-upload component provided doesn't support all extensions". That may be caused by these:
In Experience Builder sites, the file size limits and types allowed follow the settings determined by site file moderation.
lightning-file-upload doesn't support uploading multiple files at once on Android devices.
If the Don't allow HTML uploads as attachments or document records security setting is enabled for your organization, the file uploader cannot be used to upload files with the following file extensions: .htm, .html, .htt, .htx, .mhtm, .mhtml, .shtm, .shtml, .acgi, .svg.
I was wondering: if I store a video or a movie and then open that box, will the video be loaded into my RAM, or is it just read from ROM (storage)? I am a bit confused. Can anyone explain this to me?
I think you have misunderstood the concept of a database.

Any database solution is meant to store pure, organized, informational data, not large files such as media, documents, or images.

File storage, on the other hand, need not be organized; all the files can exist in one folder.

So, whichever database solution you use, store only simple data types, such as the file's path, rather than the file itself.

In this case you can define a data model, which is an essential part of using a database anyway:
@HiveType(typeId: 0)
class Movie extends HiveObject {
  @HiveField(0)
  String name;

  @HiveField(1)
  String path; // store the file's path, not the file itself
}
Since Hive supports Dart objects, you don't have to convert to JSON or anything like that in order to store the data.

So when you have the file fetched from storage, you can get the path using path_provider or from the File object itself, and then create a Movie object:
File file = await ...; // get the movie file using any means
final path = file.path;

var box = await Hive.openBox('Movies');
var m = Movie()
  ..name = 'Batman Begins'
  ..path = path;
box.add(m);
m.save();
Hope this clears your doubt.
Copy/save your video/media files to local file storage and save the file path in a Hive box.

Whenever you need the file, get the path from Hive and then load the file from local storage using that path.
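A minimal sketch of that read side (assuming the same 'Movies' box and Movie model as above; only the path lives in Hive, the bytes stay on disk until you open the file):

import 'dart:io';
import 'package:hive/hive.dart';

Future<File> loadMovieFile() async {
  var box = await Hive.openBox('Movies');
  Movie m = box.getAt(0); // or look the movie up however you index it
  return File(m.path);    // the video itself is read from storage, not kept in RAM by Hive
}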
Context: I am attempting to automate the inspection of EPS files to detect a list of attributes, such as whether the file contains locked layers, embedded bitmap images, etc.

So far we have found that some of these things can be detected via inspection of the raw EPS file data and its accompanying metadata (similar to the information returned by ImageMagick). However, it seems that in files created by Illustrator 9 and above the vast majority of this information is encoded within the "AI9_DataStream" portion of the file. This data is ASCII85 encoded and compressed. We have had some success in getting at this data by using https://github.com/huandu/node-ascii85 to decode and Node's zlib library to decompress/unzip. (Our project is written in Node/JavaScript.) However, it seems that in roughly half of our test cases/files the unzipping portion fails, throwing Z_DATA_ERROR / "incorrect data check".
Our method responsible for trying to decode:
import zlib from 'zlib';
import ascii85 from 'ascii85'; // https://github.com/huandu/node-ascii85

export const decode = eps =>
  new Promise((resolve, reject) => {
    const lineDelimiters = /\r\n%|\r%|\n%/g;
    const internal = eps.match(
      /(%AI9_DataStream)([\s\S]*?)(AI9_PrivateDataEnd)/
    );
    const hasDataStream = internal && internal.length >= 2;
    if (!hasDataStream) return resolve('');

    const encoded = internal[2].replace(lineDelimiters, '');
    const decoded = ascii85.decode(encoded);
    try {
      zlib.unzip(decoded, (err, buffer) => {
        // some files can crash this process, for now we need to allow it
        if (err) resolve('');
        else resolve(buffer.toString('utf8'));
      });
    } catch (err) {
      reject(err);
    }
  });
I am wondering if anyone out there has had any experience with this issue and has some insight into what might be causing this and whether there is an alternative avenue to explore for reliably decoding this data. Information on this topic seems a bit sparse so really anything that could get us going in the right direction would be very much appreciated.
Note: The buffers produced by the ascii85 decoding all have the same 78 9c header which should indicate standard zlib compression (and it does in fact decompress into parsable data about half the time without error)
Apparently we were misreading something about the ascii85 encoding. There is a ~> delimiter at the end of the encoded block that needs to be omitted from the string before decoding and subsequent unzipping.
So instead of:
/(%AI9_DataStream)([\s\S]*?)(AI9_PrivateDataEnd)/
Use:
/(%AI9_DataStream)([\s\S]*?)(~>)/
And you can get to the correct encoded/compressed data. So far this has produced human-readable / regexable data in all of our current test cases, so unless we are thrown another curve, that seems to be the answer.
The only reliable method for getting content from PostScript is to run it through a PostScript interpreter, because PostScript is a programming language.
If you stick to a specific workflow with well understood input, then you may have some success in simple parsing, but that's about the only likely scenario which will work.
Note that EPS files don't have 'layers' and certainly don't have 'locked' layers.
You haven't actually pointed to a working example, but I suspect the content of the AI9_DataStream is not relevant to the EPS. It's probably a means for Illustrator to include its own native file format inside the EPS file without it affecting a PostScript interpreter. This is how it works with AI-produced PDF files.
This means that when you reopen the EPS file with Adobe Illustrator, it ignores the EPS and uses the embedded native file, which magically grants you the ability to edit the file, including features like layers which cannot be represented in the EPS.
In my Grails application I need to create a file on the local system in which I save information fetched from a table in the database. How do I do this from within a controller action? I don't have any idea how.

I have created the file as:
File file = new File("file name.txt")
file.createNewFile();
Then I wrote the values of the MySQL database table fields into it like this:
file << patient.id
file << patient.name
...
It stores the data as continuous text, but I want a .doc file in which the data is laid out in a table. I found Apache POI for creating doc files, but I don't understand how it works or how I should use it.
Not sure exactly what you want to store in a file, but below is an example of how to easily write a String to a file using Apache commons-io, which should be included in Grails.
import org.apache.commons.io.FileUtils;

class SomeController {
    def writeToFile = {
        def data = getSomeStringData();
        def fileStore = new File("./path/to/files/ControllerOutput_${new Date()}.txt");
        fileStore.createNewFile();
        FileUtils.writeStringToFile(fileStore, data);
        println("your file was created at ${fileStore.absolutePath} and is ${fileStore.length()} bytes");
    }
}
Does this help? If not, you need to explain exactly what you're looking for.
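If the part you're stuck on is getting the data into a table inside a Word document, Apache POI's XWPF API can do that (this writes .docx and needs the poi-ooxml dependency; the column values below are just illustrative, reusing the patient object from the question):

import org.apache.poi.xwpf.usermodel.XWPFDocument
import org.apache.poi.xwpf.usermodel.XWPFTable

def doc = new XWPFDocument()
XWPFTable table = doc.createTable(2, 2) // 2 rows x 2 columns
table.getRow(0).getCell(0).setText("Id")
table.getRow(0).getCell(1).setText("Name")
table.getRow(1).getCell(0).setText(patient.id.toString())
table.getRow(1).getCell(1).setText(patient.name)

new File("./path/to/files/patients.docx").withOutputStream { out ->
    doc.write(out)
}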
This is a comment on Michael's answer (unfortunately I still don't have the reputation to comment on answers).

If you're struggling with how to specify the relative path from within your controller's context, this might help you.

So if the following is the folder you want to read/write files from/into:
/myproject/web-app/temp/
you can access the file like this:
import org.codehaus.groovy.grails.commons.ApplicationHolder as AH
import org.springframework.core.io.Resource

// getResource resolves paths relative to the web-app folder as the root folder
Resource resource = AH.getApplication().getParentContext().getResource("/temp/myfile.txt")
I have often noticed that when the database insert for a model fails, the data loaded before it stays in the database. So when you try to load the same fixture file again, it gives an error.

Is there any way the data:load process can be made atomic, i.e. go or no-go for all data, so that data is never inserted halfway?
Hopefully this should work:

Write a task that does the same as data:load but wraps the load in a transaction:
$databaseManager = new sfDatabaseManager($this->configuration);
$conn = $databaseManager->getDatabase('doctrine')->getDoctrineConnection();

$conn->beginTransaction();
try {
    // ... do what data:load does here ...
    $conn->commit();
} catch (Exception $e) { // maybe you can be more specific about the exception thrown
    echo $e->getMessage();
    $conn->rollback();
}
Fixtures are meant for loading initial data, which means that you should be able to run doctrine:build --all --and-load, or in other words, clear all data and reload the fixtures. It doesn't take any longer.
One option you have is to break your fixtures into multiple files and load them individually. This is also what I'd do if you first need to load large amounts of data via a script or from a CSV (i.e. something bigger than just a few fixtures). This way you don't need to redo it if you had a fixtures problem somewhere else.
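If I remember the task signature correctly, doctrine:data-load also accepts a specific file or directory, so a split might look like this (file names are just illustrative):

data/fixtures/010_users.yml
data/fixtures/020_articles.yml

# reload only one fixture file; --append keeps the rows that are already there
php symfony doctrine:data-load --append data/fixtures/020_articles.yml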