Error opening PDF retrieved from database to virtual folder - sql-server

I am trying to retrieve binary data [PDF] from SQL Server into my virtual directory:
string filePaths = System.Web.HttpContext.Current.Server.MapPath("~/TempPDF/");
I have the following code to write the data into filePath. The download works fine, but when I try to open the PDF I get the error: "Adobe Reader could not open 'FileName.pdf' because it is either not a supported file type or because the file has been damaged".
My code:
string last = fileName.Substring(fileName.LastIndexOf('.') + 1);
if (last == "pdf")
{
    using (System.IO.FileStream fs = new System.IO.FileStream(filePaths + fileName, System.IO.FileMode.CreateNew))
    {
        // use a binary writer to write the bytes to disk
        using (System.IO.BinaryWriter bw = new System.IO.BinaryWriter(fs))
        {
            bw.Write(Data, 0, Data.Length);
            //bw.Write(Data);
            bw.Flush();
            bw.Close();
        }
    }
}

Related

Downloads folder shared file write permissions problem

I am trying to get an offline backup function working on Android 12. It has worked for years on previous versions of Android (6 and 8). It is required because the size of the backup can often exceed 25 MB. I am using a Samsung A7 Lite for this testing to ensure Android 12 compliance.

Essentially, the function first creates a backup folder in the Downloads folder if it does not exist, then writes a backup file to that folder. All goes well, and I can repeat the function any number of times without a problem. It retains father and grandfather versions for safety.

However, if I try to use the same function on the existing files the following day, I am presented with a java.io.FileNotFoundException: open failed: EACCES (Permission denied). This situation appears very illogical and does not seem to follow the documentation on accessing the Downloads folder. If I manually delete the backup file from the previous day, the process succeeds; similarly, if I delete the backup directory within the Downloads folder, the backup proceeds successfully. The app asks the user for the appropriate permissions, which I believe are read and write external storage. Can anybody identify what I am doing wrong in this environment?
The code is below.
String path = "";
// if no external, set to download
if (path.equals("")) {
    File systemPath = Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_DOWNLOADS);
    path = systemPath.getAbsolutePath();
}
// set up backup subdirectory
path = path + "/backup";
// check if path exists
File backupDir = new File(path);
if (!backupDir.exists()) {
    try {
        backupDir.mkdirs();
        MediaScannerConnection.scanFile(this, new String[]{backupDir.getAbsolutePath()}, null, null);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
// first get rid of old backup files, leaving at least 2 older versions
File backupFile = new File(path, "backup3.bkp");
if (backupFile.exists())
    backupFile.delete();
for (int i = 3; i > 1; i--) {
    File renameBackupFile = new File(path, "backup" + i + ".bkp");
    File existBackupFile = null;
    if (i == 2)
        existBackupFile = new File(path, "backup.bkp");
    else
        existBackupFile = new File(path, "backup" + (i - 1) + ".bkp");
    if (existBackupFile.exists()) {
        try {
            existBackupFile.renameTo(renameBackupFile);
        } catch (Exception e) {
            String message = e.toString();
        }
    }
}
// create a new backup
String fileName = "backup.bkp";
String backup = path + "/" + fileName;
FileInputStream dataBaseFile = new FileInputStream(DB_PATH);
File newBackupFile = new File(backup);
newBackupFile.createNewFile();
FileOutputStream backupStream = new FileOutputStream(newBackupFile);
// transfer bytes from the input file to the output file
byte[] buffer = new byte[1024];
int length;
while ((length = dataBaseFile.read(buffer)) > 0) {
    backupStream.write(buffer, 0, length);
}
// close the streams
backupStream.flush();
backupStream.close();
dataBaseFile.close();
MediaScannerConnection.scanFile(this, new String[]{newBackupFile.getAbsolutePath()}, null, null);
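For what it's worth, the symptom (EACCES only for files left over from a previous day or a previous install) matches the scoped-storage behaviour introduced around Android 10/11: with the plain File API, an app can generally only overwrite files in Downloads that it itself created, and that ownership can be lost (for example after a reinstall). Below is a sketch of a MediaStore-based write for API 29+; the MediaStore column names are real, but the method and its parameters are illustrative, and it only covers the final write step, not the rotation of old backups:

import android.content.ContentResolver;
import android.content.ContentValues;
import android.net.Uri;
import android.os.Environment;
import android.provider.MediaStore;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// writes the database file as Download/backup/backup.bkp via MediaStore (API 29+)
void backupViaMediaStore(ContentResolver resolver, String dbPath) throws IOException {
    ContentValues values = new ContentValues();
    values.put(MediaStore.Downloads.DISPLAY_NAME, "backup.bkp");
    values.put(MediaStore.Downloads.RELATIVE_PATH,
            Environment.DIRECTORY_DOWNLOADS + "/backup");
    Uri uri = resolver.insert(MediaStore.Downloads.EXTERNAL_CONTENT_URI, values);
    try (InputStream in = new FileInputStream(dbPath);
         OutputStream out = resolver.openOutputStream(uri)) {
        byte[] buffer = new byte[8192];
        int length;
        while ((length = in.read(buffer)) > 0) {
            out.write(buffer, 0, length);
        }
    }
}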

Is there a way in Selenium to upload the last downloaded file with a dynamic name?

The problem I am facing is that the file has a dynamic number at the end.
For example: Tax_subscription_124.pdf, where the number changes every time.
Can I upload this particular file? I am downloading it to a particular location, but I am not able to upload it because of the dynamic name.
The following code returns the last modified file or folder:
public static File getLastModified(String directoryFilePath)
{
    File directory = new File(directoryFilePath);
    File[] files = directory.listFiles(File::isFile);
    long lastModifiedTime = Long.MIN_VALUE;
    File chosenFile = null;
    if (files != null)
    {
        for (File file : files)
        {
            if (file.lastModified() > lastModifiedTime)
            {
                chosenFile = file;
                lastModifiedTime = file.lastModified();
            }
        }
    }
    return chosenFile;
}
Note that it requires Java 8 or newer because of the method reference (File::isFile).
After that:
File chosenFile = getLastModified("/path/to/download/folder"); // placeholder: your download directory
WebElement fileInput = driver.findElement(By.name("uploadfile"));
fileInput.sendKeys(chosenFile.getAbsolutePath()); // sendKeys expects the path as a String
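One caveat worth noting: if a download is still in progress, the most recently modified file may be a partial one. Chrome, for example, writes in-progress downloads with a .crdownload suffix. A variant of the helper that skips those (the filter is the only change; the assumption that Chrome is the downloading browser is mine):

import java.io.File;

public static File getLastCompletedDownload(String directoryFilePath)
{
    // same as getLastModified, but ignore Chrome's partial .crdownload files
    File[] files = new File(directoryFilePath)
            .listFiles(f -> f.isFile() && !f.getName().endsWith(".crdownload"));
    File chosenFile = null;
    if (files != null)
    {
        for (File file : files)
        {
            if (chosenFile == null || file.lastModified() > chosenFile.lastModified())
            {
                chosenFile = file;
            }
        }
    }
    return chosenFile;
}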

Eclipse PDE: Get full path of an external file open in Workbench

I am writing an Eclipse plugin which requires me to get the full path of any kind of file open in the workbench.
I am able to get the full path of any file which is part of an Eclipse project. Here is the code to get the open/active editor file from the workspace:
public static String getActiveFilename(IWorkbenchWindow window) {
    IWorkbenchPage activePage = window.getActivePage();
    IEditorInput input = activePage.getActiveEditor().getEditorInput();
    String name = activePage.getActiveEditor().getEditorInput().getName();
    PluginUtils.log(activePage.getActiveEditor().getClass() + " Editor.");
    IPath path = input instanceof FileEditorInput ? ((FileEditorInput) input).getPath() : null;
    if (path != null) {
        return path.toPortableString();
    }
    return name;
}
However, this does not work if a file is drag-and-dropped into the workspace or opened using File -> Open File. For instance, I opened /Users/mac/log.txt via File -> Open File, and my plugin is not able to find the location of this file.
After a couple of days of searching, I found the answer by looking at the source code of the Eclipse IDE.
In IDE.class, Eclipse tries to find a suitable editor input depending on whether the file is a workspace file or an external file. Eclipse handles files in the workspace using FileEditorInput and external files using FileStoreEditorInput. Code snippet below:
/**
 * Create the Editor Input appropriate for the given <code>IFileStore</code>.
 * The result is a normal file editor input if the file exists in the
 * workspace and, if not, we create a wrapper capable of managing an
 * 'external' file using its <code>IFileStore</code>.
 *
 * @param fileStore
 *            The file store to provide the editor input for
 * @return The editor input associated with the given file store
 * @since 3.3
 */
private static IEditorInput getEditorInput(IFileStore fileStore) {
    IFile workspaceFile = getWorkspaceFile(fileStore);
    if (workspaceFile != null)
        return new FileEditorInput(workspaceFile);
    return new FileStoreEditorInput(fileStore);
}
I have modified the code posted in the question to handle both workspace files and external files.
public static String getActiveEditorFilepath(IWorkbenchWindow window) {
    IWorkbenchPage activePage = window.getActivePage();
    IEditorInput input = activePage.getActiveEditor().getEditorInput();
    String name = activePage.getActiveEditor().getEditorInput().getName();
    // Path of files in the workspace.
    IPath path = input instanceof FileEditorInput ? ((FileEditorInput) input).getPath() : null;
    if (path != null) {
        return path.toPortableString();
    }
    // Path of externally opened files in the editor context.
    try {
        URI urlPath = input instanceof FileStoreEditorInput ? ((FileStoreEditorInput) input).getURI() : null;
        if (urlPath != null) {
            return new File(urlPath.toURL().getPath()).getAbsolutePath();
        }
    } catch (MalformedURLException e) {
        e.printStackTrace();
    }
    // Fallback option to at least get the name.
    return name;
}
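As a side note, java.io.File has a constructor that accepts a file: URI directly, so the URL conversion (and the MalformedURLException handling) can be avoided. A small variant, only valid for file-scheme URIs:

// replacement for the try/catch block above
URI urlPath = input instanceof FileStoreEditorInput ? ((FileStoreEditorInput) input).getURI() : null;
if (urlPath != null && "file".equals(urlPath.getScheme())) {
    return new File(urlPath).getAbsolutePath();
}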

Read cloud storage content with "gzip" encoding for "application/octet-stream" type content

We're using the Google Cloud Storage Client Library for App Engine, with simply GcsFileOptions.Builder.contentEncoding("gzip") at file creation time. We get the following problem when reading the file:
com.google.appengine.tools.cloudstorage.NonRetriableException: java.lang.RuntimeException: com.google.appengine.tools.cloudstorage.SimpleGcsInputChannelImpl$1@1c07d21: Unexpected cause of ExecutionException
at com.google.appengine.tools.cloudstorage.RetryHelper.doRetry(RetryHelper.java:87)
at com.google.appengine.tools.cloudstorage.RetryHelper.runWithRetries(RetryHelper.java:129)
at com.google.appengine.tools.cloudstorage.RetryHelper.runWithRetries(RetryHelper.java:123)
at com.google.appengine.tools.cloudstorage.SimpleGcsInputChannelImpl.read(SimpleGcsInputChannelImpl.java:81)
...
Caused by: java.lang.RuntimeException: com.google.appengine.tools.cloudstorage.SimpleGcsInputChannelImpl$1@1c07d21: Unexpected cause of ExecutionException
at com.google.appengine.tools.cloudstorage.SimpleGcsInputChannelImpl$1.call(SimpleGcsInputChannelImpl.java:101)
at com.google.appengine.tools.cloudstorage.SimpleGcsInputChannelImpl$1.call(SimpleGcsInputChannelImpl.java:81)
at com.google.appengine.tools.cloudstorage.RetryHelper.doRetry(RetryHelper.java:75)
... 56 more
Caused by: java.lang.IllegalStateException: com.google.appengine.tools.cloudstorage.oauth.OauthRawGcsService$2@1d8c25d: got 46483 > wanted 19823
at com.google.common.base.Preconditions.checkState(Preconditions.java:177)
at com.google.appengine.tools.cloudstorage.oauth.OauthRawGcsService$2.wrap(OauthRawGcsService.java:418)
at com.google.appengine.tools.cloudstorage.oauth.OauthRawGcsService$2.wrap(OauthRawGcsService.java:398)
at com.google.appengine.api.utils.FutureWrapper.wrapAndCache(FutureWrapper.java:53)
at com.google.appengine.api.utils.FutureWrapper.get(FutureWrapper.java:90)
at com.google.appengine.tools.cloudstorage.SimpleGcsInputChannelImpl$1.call(SimpleGcsInputChannelImpl.java:86)
... 58 more
What else needs to be done to read files with "gzip" content encoding in App Engine? (Fetching the Cloud Storage URL with curl from the client side works fine for both compressed and uncompressed files.)
This is the code that works for an uncompressed object:
byte[] blobContent = new byte[0];
try
{
    GcsFileMetadata metaData = gcsService.getMetadata(fileName);
    int fileSize = (int) metaData.getLength();
    final int chunkSize = BlobstoreService.MAX_BLOB_FETCH_SIZE;
    LOG.info("content encoding: " + metaData.getOptions().getContentEncoding()); // "gzip" here
    LOG.info("input size " + fileSize); // the size is obviously the compressed size!
    for (long offset = 0; offset < fileSize;)
    {
        if (offset != 0)
        {
            LOG.info("Handling extra size for " + filePath + " at " + offset);
        }
        final int size = Math.min(chunkSize, fileSize);
        ByteBuffer result = ByteBuffer.allocate(size);
        GcsInputChannel readChannel = gcsService.openReadChannel(fileName, offset);
        try
        {
            readChannel.read(result); // <<<< here the exception was thrown
        }
        finally
        {
            ......
It is now compressed by:
GcsFilename filename = new GcsFilename(bucketName, filePath);
GcsFileOptions.Builder builder = new GcsFileOptions.Builder().mimeType(image_type);
builder = builder.contentEncoding("gzip");
GcsOutputChannel writeChannel = gcsService.createOrReplace(filename, builder.build());

ByteArrayOutputStream byteStream = new ByteArrayOutputStream(blob_content.length);
try
{
    GZIPOutputStream zipStream = new GZIPOutputStream(byteStream);
    try
    {
        zipStream.write(blob_content);
    }
    finally
    {
        zipStream.close();
    }
}
finally
{
    byteStream.close();
}
byte[] compressedData = byteStream.toByteArray();
writeChannel.write(ByteBuffer.wrap(compressedData));
The blob_content is compressed from 46483 bytes to 19823 bytes.
I think this is a bug in the Google code:
https://code.google.com/p/appengine-gcs-client/source/browse/trunk/java/src/main/java/com/google/appengine/tools/cloudstorage/oauth/OauthRawGcsService.java, L418:
Preconditions.checkState(content.length <= want, "%s: got %s > wanted %s", this, content.length, want);
The HTTPResponse has already decoded the blob, so the precondition is wrong here.
If I understand correctly, you have to set the mimeType:
GcsFileOptions options = new GcsFileOptions.Builder().mimeType("text/html").build();
Google Cloud Storage does not compress or decompress objects:
https://developers.google.com/storage/docs/reference-headers?csw=1#contentencoding
I hope that's what you want to do.
Looking at your code, it seems like there is a mismatch between what is stored and what is read. The documentation specifies that compression is not done for you (https://developers.google.com/storage/docs/reference-headers?csw=1#contentencoding). You will need to do the actual compression manually.
Also, if you look at the implementation of the class that throws the exception (https://code.google.com/p/appengine-gcs-client/source/browse/trunk/java/src/main/java/com/google/appengine/tools/cloudstorage/oauth/OauthRawGcsService.java?r=81&spec=svn134), you will notice that you get the original contents back, but you're actually expecting compressed content. Check the method readObjectAsync in the above-mentioned class.
It looks like the content persisted might not be gzipped, or the content length is not set properly. What you should do is verify the length of the compressed stream just before writing it into the channel. You should also verify that the content length is set correctly when doing the HTTP request. It would be useful to see the actual HTTP request headers and make sure that the Content-Length header matches the actual content length in the HTTP response.
Also, it looks like contentEncoding could be set incorrectly. Try using .contentEncoding("Content-Encoding: gzip") as used in this TCK test. Although the best thing to do is still to inspect the HTTP request and response; you can use Wireshark to do that easily.
Also, you need to make sure that the GcsOutputChannel is closed, as that's when the file is finalized.
Hope this puts you on the right track. To gzip your contents you can use Java's GZIPOutputStream.
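For reference, here is a minimal, self-contained sketch of that manual gzip round-trip (the class and method names are illustrative; note that, per the discussion above, the App Engine HTTP layer may already hand you decompressed bytes when reading back):

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public final class GzipUtil {
    // compress raw bytes with gzip (what you would do before writing the object)
    static byte[] gzip(byte[] raw) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(raw);
        }
        return bos.toByteArray();
    }

    // decompress gzipped bytes (what you would do after reading, if the
    // transport has not already decoded them for you)
    static byte[] gunzip(byte[] compressed) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPInputStream gz = new GZIPInputStream(new ByteArrayInputStream(compressed))) {
            byte[] buf = new byte[8192];
            int len;
            while ((len = gz.read(buf)) > 0) {
                bos.write(buf, 0, len);
            }
        }
        return bos.toByteArray();
    }
}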
I'm seeing the same issue, easily reproducible by uploading a file with gsutil cp -Z, then trying to open it with the following:
ByteArrayOutputStream output = new ByteArrayOutputStream();
try (GcsInputChannel readChannel = svc.openReadChannel(filename, 0)) {
    try (InputStream input = Channels.newInputStream(readChannel)) {
        IOUtils.copy(input, output);
    }
}
This causes an exception like this:
java.lang.IllegalStateException: ....oauth.OauthRawGcsService$2@1883798: got 64303 > wanted 4096
at ....Preconditions.checkState(Preconditions.java:199)
at ....oauth.OauthRawGcsService$2.wrap(OauthRawGcsService.java:519)
at ....oauth.OauthRawGcsService$2.wrap(OauthRawGcsService.java:499)
The only workaround I've found is to read the entire file into memory using readChannel.read:
int fileSize = 64303;
ByteBuffer result = ByteBuffer.allocate(fileSize);
try (GcsInputChannel readChannel = gcs.openReadChannel(new GcsFilename("mybucket", "mygzippedfile.xml"), 0)) {
    readChannel.read(result);
}
Unfortunately, this only works if the size of the ByteBuffer is greater than or equal to the uncompressed size of the file, which is not possible to get via the API.
I've also posted my comment on an issue registered with Google: https://code.google.com/p/googleappengine/issues/detail?id=10445
This is my function for reading compressed gzip files:
public byte[] getUpdate(String fileName) throws IOException
{
    GcsFilename fileNameObj = new GcsFilename(defaultBucketName, fileName);
    int bytesRead;
    try (GcsInputChannel readChannel = gcsService.openReadChannel(fileNameObj, 0))
    {
        maxSizeBuffer.clear();
        bytesRead = readChannel.read(maxSizeBuffer);
    }
    // copy only the bytes actually read; returning maxSizeBuffer.array()
    // directly would also include the unused tail of the buffer
    byte[] result = Arrays.copyOf(maxSizeBuffer.array(), bytesRead);
    return result;
}
The crux is that you cannot use the size of the saved file, because Google Storage gives the content back to you at its original size; the client then checks the size you expected against the real size, and they are different:
Preconditions.checkState(content.length <= want, "%s: got %s > wanted %s", this, content.length, want);
So I solved it by allocating the biggest amount possible for these files using BlobstoreService.MAX_BLOB_FETCH_SIZE. Note that maxSizeBuffer is only allocated once, outside the function:
ByteBuffer maxSizeBuffer = ByteBuffer.allocate(BlobstoreService.MAX_BLOB_FETCH_SIZE);
Calling maxSizeBuffer.clear() then resets the buffer before each read.

FileStream seems to find a nonexistent file

I have this code:
public static string GetUserEmail()
{
    string path = Application.StartupPath + "\\mail.txt";
    MessageBox.Show(path);
    string adres = String.Empty;
    if (File.Exists(path))
    {
        using (StreamReader sr = new StreamReader(path))
        {
            adres = sr.ReadLine();
        }
    }
    else
    {
        using (FileStream fs = File.Create(path))
        {
            using (StreamReader sr = new StreamReader(path))
            {
                adres = sr.ReadLine();
            }
        }
    }
    MessageBox.Show(adres);
    return adres;
}
I checked the path with MessageBox.Show(path), went there and deleted the file, re-launched the app, and it still reads the previous line. I uninstalled and re-installed the app, and it still finds the file and reads the same line I entered during the very first installation. I searched Windows and the whole C drive: there is no mail.txt, yet the app still finds mail.txt and reads the line (an email address used to identify the user).
What can it be? Aliens?
Firstly, which code route does the program take? The one where the file is created, or the one where the existing file is read?
Try placing a breakpoint just before the existence of the file is checked, and then check at that point whether the file exists or not.
Do you have any code elsewhere that creates and writes the file as part of the application startup?
Otherwise, it's definitely aliens.
