I have an issue with iTextSharp. Let's assume I have two rows of fields in a PDF file (the file is given and I don't know how it was created):
Row 1:
data[0].#subform[0].Tabella1[0].Riga2[0].DATA[0]
data[0].#subform[0].Tabella1[0].Riga2[0].ORAINIPM[0]
data[0].#subform[0].Tabella1[0].Riga2[0].ORAINILM[0]
data[0].#subform[0].Tabella1[0].Riga2[0].ORAENDLM[0]
data[0].#subform[0].Tabella1[0].Riga2[0].ORAENDAM[0]
data[0].#subform[0].Tabella1[0].Riga2[0].ORAINIPP[0]
data[0].#subform[0].Tabella1[0].Riga2[0].ORAINILP[0]
data[0].#subform[0].Tabella1[0].Riga2[0].ORAENDLP[0]
data[0].#subform[0].Tabella1[0].Riga2[0].ORAENDAP[0]
Row 2:
data[0].#subform[0].Tabella1[0].Riga3[0].DATA[0]
data[0].#subform[0].Tabella1[0].Riga3[0].ORAINIPM[0]
data[0].#subform[0].Tabella1[0].Riga3[0].ORAINILM[0]
data[0].#subform[0].Tabella1[0].Riga3[0].ORAENDLM[0]
data[0].#subform[0].Tabella1[0].Riga3[0].ORAENDAM[0]
data[0].#subform[0].Tabella1[0].Riga3[0].ORAINIPP[0]
data[0].#subform[0].Tabella1[0].Riga3[0].ORAINILP[0]
data[0].#subform[0].Tabella1[0].Riga3[0].ORAENDLP[0]
data[0].#subform[0].Tabella1[0].Riga3[0].ORAENDAP[0]
I read these fields using the code below:
String newFile = source.Insert(source.Length - 4, "newModyfiy");
using (FileStream outFile = new FileStream(newFile, FileMode.Create))
{
    PdfReader pdfReader = new PdfReader(source);
    foreach (KeyValuePair<String, AcroFields.Item> kvp in pdfReader.AcroFields.Fields)
    {
        int fieldType = pdfReader.AcroFields.GetFieldType(kvp.Key);
        string fieldValue = pdfReader.AcroFields.GetField(kvp.Key);
        string transFieldName = pdfReader.AcroFields.GetTranslatedFieldName(kvp.Key);
        textBox1.Text = textBox1.Text + fieldType.ToString() + " " + fieldValue + " " + transFieldName + Environment.NewLine;
    }
    pdfReader.Close();
}
For both rows I am getting the values of the first row only. My target is to write values to those fields and save a new file. When I use:
PdfStamper pdfStamper = new PdfStamper(pdfReader, new FileStream(newFile, FileMode.Create), '\0', true);
I always overwrite the values of the first row (when I try to set a value in the second row it appears in the first). If I change the last parameter of PdfStamper to false it writes the fields correctly, but the file is no longer manually editable.
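For completeness, this is roughly my write path (a minimal sketch; the field names are the fully qualified ones listed above and the date values are just placeholders):
// Minimal sketch of the write path (iTextSharp 5.x); values are placeholders.
using (FileStream outFile = new FileStream(newFile, FileMode.Create))
{
    PdfReader pdfReader = new PdfReader(source);
    // last parameter = append mode, needed to keep the form manually editable
    PdfStamper pdfStamper = new PdfStamper(pdfReader, outFile, '\0', true);
    AcroFields form = pdfStamper.AcroFields;
    form.SetField("data[0].#subform[0].Tabella1[0].Riga2[0].DATA[0]", "01/01/2018");
    // this value ends up in Riga2 instead of Riga3:
    form.SetField("data[0].#subform[0].Tabella1[0].Riga3[0].DATA[0]", "02/01/2018");
    pdfStamper.Close();
    pdfReader.Close();
}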
Is it a matter of the PDF file? Is there a way to read and then write values to the proper fields?
I have spent a few days on this and could not find the reason for this strange behaviour.
Any small help or even a clue will be appreciated.
Edit:
I have added the PDF file mentioned above.
https://ufile.io/mwni5
I have deleted some objects but the general structure is kept.
Related
I am reading a 17-column CSV file into a database.
Once in a while the file has a row with fewer than 17 columns.
I am trying to ignore such rows, but even when all columns are set to ignore, I can't skip them and the package fails.
How do I ignore those rows?
Solution Overview
You can do this by adding a Flat File Connection Manager with only one column of data type DT_WSTR and a length of 4000 (assume it is named Column0), so the whole row is treated as one big column.
In the Data Flow Task, add a Script Component after the Flat File Source.
Mark Column0 as an Input Column and add 17 Output Columns.
In the Input0_ProcessInputRow method, split Column0 by the delimiter, then check whether the length of the array is 17; if so, assign the values to the output columns, else ignore the row.
Detailed Solution
Add a Flat File Connection Manager and select the text file.
Go to the Advanced tab and delete all columns except one.
Change the data type of the remaining column to DT_WSTR and the length to 4000.
Add a Data Flow Task.
Inside the Data Flow Task add a Flat File Source, a Script Component and an OLE DB Destination.
In the Script Component, select Column0 as an Input Column.
Add 17 Output Columns (the final output columns).
Change the Output Buffer's SynchronousInput property to None.
Set the Script Language to Visual Basic.
In the Script Editor, write the following script:
Public Overrides Sub Input0_ProcessInputRow(ByVal Row As Input0Buffer)

    If Not Row.Column0_IsNull AndAlso
       Not String.IsNullOrEmpty(Row.Column0.Trim) Then

        Dim strColumns As String() = Row.Column0.Split(CChar(";"))

        If strColumns.Length <> 17 Then Exit Sub

        Output0Buffer.AddRow()
        Output0Buffer.Column = strColumns(0)
        Output0Buffer.Column1 = strColumns(1)
        Output0Buffer.Column2 = strColumns(2)
        Output0Buffer.Column3 = strColumns(3)
        Output0Buffer.Column4 = strColumns(4)
        Output0Buffer.Column5 = strColumns(5)
        Output0Buffer.Column6 = strColumns(6)
        Output0Buffer.Column7 = strColumns(7)
        Output0Buffer.Column8 = strColumns(8)
        Output0Buffer.Column9 = strColumns(9)
        Output0Buffer.Column10 = strColumns(10)
        Output0Buffer.Column11 = strColumns(11)
        Output0Buffer.Column12 = strColumns(12)
        Output0Buffer.Column13 = strColumns(13)
        Output0Buffer.Column14 = strColumns(14)
        Output0Buffer.Column15 = strColumns(15)
        Output0Buffer.Column16 = strColumns(16)

    End If

End Sub
Map the Output Columns to the Destination Columns
C# solution for loading a CSV and skipping rows that don't have 17 columns:
Use a Script Component:
On the input/output screen, add all of your outputs with data types.
string fName = @"C:\test.csv"; // Full file path: it should be referenced via a variable
string[] lines = System.IO.File.ReadAllLines(fName);

//add a counter
int ctr = 1;

foreach (string line in lines)
{
    string[] cols = line.Split(',');

    if (ctr != 1) //Assumes a header row. Remove this check if the 1st row has data
    {
        if (cols.Length == 17)
        {
            //Write out to Output
            Output0Buffer.AddRow();
            Output0Buffer.Col1 = cols[0]; //You need to cast to the column's data type
            Output0Buffer.Col2 = int.Parse(cols[1]); // example of casting to int
            Output0Buffer.Col3 = DateTime.Parse(cols[2]); // example of casting to DateTime
            ... //rest of Columns
        }
        //optional else to handle skipped lines
        //else
        //    write out line somewhere
    }

    ctr++; //increment counter
}
This is for @SidC's comment on my other answer.
This lets you work with multiple files:
//set up variables
string line;
int ctr = 0;
string[] files = System.IO.Directory.GetFiles(@"c:/path", "filenames*.txt");

foreach (string file in files)
{
    var str = new System.IO.StreamReader(file);
    while ((line = str.ReadLine()) != null)
    {
        // Work with line here similar to the other answer
    }
}
I am trying to automate downloading a CSV file and reading the data from it.
I tried this:
CSVReader reader = new CSVReader(new FileReader("D:\\File\\1453.csv"));
String [] csvCell;
//while loop will be executed till the last line In CSV.
while ((csvCell = reader.readNext()) != null) {
String FName = csvCell[0];
String LName = csvCell[1];
String Email = csvCell[2];
String Mob = csvCell[3];
String company = csvCell[4];
The problem is that I need to give the file name when specifying the path, but I can't hard-code the name because it changes at runtime after downloading. Please suggest.
If the filename is the same as the download link (even if only partially), you can get the link from the download button (or whatever element it is) using getAttribute("href"), and then use it to build the filename to read from.
String fileName = driver.findElement(By.id("<download_locator>")).getAttribute("href");
CSVReader reader = new CSVReader(new FileReader("D:\\File\\" + fileName));
String[] csvCell;
//while loop will be executed till the last line in the CSV.
while ((csvCell = reader.readNext()) != null) {
    String FName = csvCell[0];
    String LName = csvCell[1];
    String Email = csvCell[2];
    String Mob = csvCell[3];
    String company = csvCell[4];
}
Have you tried this, passing the file name in as a parameter from a method?
CSVReader reader = new CSVReader(new FileReader("D:\\File\\" + provideFileName));
I have the code below that splits a text file from IsolatedStorage, populates an Array with the data, sorts it, and then assigns it as the source for a ListPicker:
var splitFile = fileData.Split(';');
string[] testArray = splitFile;
Array.Sort<string>(testArray);
testLocationPicker.ItemsSource = testArray;
However, it doesn't seem to be populating the array correctly, and the sorting doesn't appear to be working as expected either.
testArray[0] is blank when it should be populated. When the output is shown, the entry that should be at [0] appears at the bottom.
BEFORE SORTING: (screenshot)
AFTER SORTING: (screenshot)
It's only when sorting the array that the order seems to get screwed up.
UPDATE: I tried the suggested:
var splitFile = fileData.Split(new[] { ';' }, StringSplitOptions.RemoveEmptyEntries);
string[] testArray = splitFile;
Array.Sort<string>(testArray);
testLocationPicker.ItemsSource = testArray;
This still results in the second screenshot, above.
When the app runs for the very first time I do this:
StringBuilder sb = new StringBuilder(); // Use a StringBuilder to construct output.
var store = IsolatedStorageFile.GetUserStoreForApplication(); // Create a store
store.CreateDirectory("testLocations"); // Create a directory
IsolatedStorageFileStream rootFile = store.CreateFile("locations.txt"); // Create a file in the root.
rootFile.Close(); // Close file
string[] filesInTheRoot = store.GetFileNames(); // Store all file names in an array
Debug.WriteLine(filesInTheRoot[0]); // Show first file name retrieved (only one stored at the moment)

string filePath = "locations.txt";

if (store.FileExists(filePath))
{
    Debug.WriteLine("Files Exists");

    StreamWriter sw = new StreamWriter(store.OpenFile(filePath, FileMode.Open, FileAccess.Write));

    Debug.WriteLine("Writing...");
    sw.WriteLine("Chicago, IL;");
    sw.WriteLine("Chicago, IL (Q);");
    sw.WriteLine("Dulles, VA;");
    sw.WriteLine("Dulles, VA (Q);");
    sw.WriteLine("London, UK;");
    sw.WriteLine("London, UK (Q);");
    sw.WriteLine("San Jose, CA;");
    sw.WriteLine("San Jose, CA (Q);");
    sw.Close();
    Debug.WriteLine("Writing complete");
}
Then when I add to the file I do this:
StringBuilder sb = new StringBuilder(); // Use a StringBuilder to construct output.
var store = IsolatedStorageFile.GetUserStoreForApplication(); // Create a store
string[] filesInTheRoot = store.GetFileNames(); // Store all file names in an array
Debug.WriteLine(filesInTheRoot[0]); // Show first file name retrieved (only one stored at the moment)

byte[] data = Encoding.UTF8.GetBytes(locationName + ";"); // Semicolon required for location separation in text file
string filePath = "locations.txt";

if (store.FileExists(filePath))
{
    using (var stream = new IsolatedStorageFileStream(filePath, FileMode.Append, store))
    {
        Debug.WriteLine("Writing...");
        stream.Write(data, 0, data.Length);
        stream.Close();
        Debug.WriteLine(locationName + "; added");
        Debug.WriteLine("Writing complete");
    }
}
I'm splitting using a ";". Could this be the issue?
There's no problem with the sort: 'space' is considered to come before 'a', so it appears on top of the list. The real problem is: why do you have an empty entry to begin with?
My guess is that, when creating the file, you're separating every entry with ;, including the last one. Therefore, when parsing the data with the string.Split method, you're left with an empty entry at the end of your array.
An easy way to prevent that is to use an overload of the string.Split method that filters empty entries:
var splitFile = fileData.Split(new[] { ';' }, StringSplitOptions.RemoveEmptyEntries);
I went a different way, using IsolatedStorageSettings and storing Arrays/Lists to do what I wanted.
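For anyone interested, it was roughly along these lines (a sketch only; the "locations" key name is just what I chose, and it needs System.IO.IsolatedStorage and System.Collections.Generic):
// Sketch of the IsolatedStorageSettings approach (Windows Phone).
var settings = IsolatedStorageSettings.ApplicationSettings;
List<string> locations;
if (!settings.TryGetValue("locations", out locations))
{
    locations = new List<string>();
    settings["locations"] = locations;
}
locations.Add("Chicago, IL");
locations.Sort();                 // no trailing-separator problem to worry about
settings.Save();                  // persist to isolated storage
testLocationPicker.ItemsSource = locations;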
I'm trying to find a way to count the columns coming from a flat file. All my columns are concatenated in a single cell, separated by a '|'.
After various attempts, it seems that only a script task can handle this.
Can anyone help me with that? I unfortunately have no experience with scripting in C# or VB.
Thanks a lot
Emmanuel
To better illustrate, below is the output of what I want to achieve: a single cell containing all the headers coming from a flat file. The thing is, to get this result I manually appended all the column names to each other in the previous step (a Derived Column) in order to concatenate them with a '|' separator.
Now, if my flat file source layout changes, this won't work any more because of that manual process. So I think I have to use a script instead, which basically returns the number of columns (headers) in a variable and lets me remove the hard-coded part in the Derived Column transformation, for instance.
This is a very old thread; however, I just stumbled on a similar problem: a flat file with a number of different record "formats" inside. Many different formats, not in any particular order, meaning you might have 57 fields in one line, then 59 in the next 1000, then 56 in the next 10000, back to 57... well, I think you get the idea.
For lack of better ideas, I decided to break that file up based on the number of commas in each line, and then import the different record types (now bunched together) using an SSIS package for each type.
So the answer to this question is here, with a bit more code to produce the files.
Hope this helps somebody with the same problem.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.IO;

namespace OddFlatFile_Transformation
{
    class RedistributeLines
    {
        /*
         * This routine opens a text file and reads it line by line.
         * For each line the number of "," (commas) is counted,
         * and then the line is written into another text file
         * based on the number of commas found.
         * For example, if there are 15 commas in a given line,
         * the line is written to WhateverFileName_15.Ext;
         * WhateverFileName and Ext are the same file name and
         * extension as the original file being read.
         * The application tests WhateverFileName_NN.Ext for existence
         * and creates the file in case it does not exist yet.
         * To better track the split records, a sequential identifier
         * based on the number of lines read is added to the beginning
         * of each line written, independently of the file and record number.
         */
        static void Main(string[] args)
        {
            // get the fully qualified file name from the console
            String strFileToRead;
            strFileToRead = Console.ReadLine();

            // create reader & open file
            StreamReader srTextFileReader = new StreamReader(strFileToRead);

            string strLineRead = "";
            string strFileToWrite = "";
            string strLineIdentifier = "";
            string strLineToWrite = "";
            int intCountLines = 0;
            int intCountCommas = 0;
            int intDotPosition = 0;
            const string strZeroPadding = "00000000";

            // Processing begins
            Console.WriteLine("Processing begins: " + DateTime.Now);

            /* Main Loop */
            while (strLineRead != null)
            {
                // read a line of text, count the commas and create the line identifier
                strLineRead = srTextFileReader.ReadLine();
                if (strLineRead != null)
                {
                    intCountLines += 1;
                    strLineIdentifier = strZeroPadding.Substring(0, strZeroPadding.Length - intCountLines.ToString().Length) + intCountLines;

                    intCountCommas = 0;
                    foreach (char chrEachPosition in strLineRead)
                    {
                        if (chrEachPosition == ',') intCountCommas++;
                    }

                    // Based on the number of commas determined above,
                    // the name of the file to be written to is established
                    intDotPosition = strFileToRead.IndexOf(".");
                    strFileToWrite = strFileToRead.Substring(0, intDotPosition) + "_";
                    if (intCountCommas < 10)
                    {
                        strFileToWrite += "0" + intCountCommas;
                    }
                    else
                    {
                        strFileToWrite += intCountCommas;
                    }
                    strFileToWrite += strFileToRead.Substring(intDotPosition, (strFileToRead.Length - intDotPosition));

                    // Using the file name established above, the line captured
                    // during the read phase is written to that file
                    StreamWriter swTextFileWriter = new StreamWriter(strFileToWrite, true);
                    strLineToWrite = "[" + strLineIdentifier + "] " + strLineRead;
                    swTextFileWriter.WriteLine(strLineToWrite);
                    swTextFileWriter.Close();
                    Console.WriteLine(strLineIdentifier);
                }
            }

            // close the stream
            srTextFileReader.Close();
            Console.WriteLine(DateTime.Now);
            Console.ReadLine();
        }
    }
}
Please refer to my answers to the following Stack Overflow questions. Those answers might give you an idea of how to load a flat file that contains a varying number of columns.
The example in the following question reads a file containing data separated by the special character Ç (c-cedilla). In your case, the delimiter is a vertical bar (|).
UTF-8 flat file import to SQL Server 2008 not recognizing {LF} row delimiter
The example in the following question reads an EDI file that contains different sections with varying numbers of columns. The package reads the file and loads it, with parent-child relationships, into SQL tables.
how to load a flat file with header and detail parent child relationship into SQL server
Based on the logic used in those answers, you can also count the number of columns by splitting each row in the file by the column delimiter (vertical bar |).
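As a rough illustration, the core of it is just a split and a count (a sketch only; "input.txt" is a placeholder path to adapt to your package):
// Sketch: count the columns of each row by splitting on the vertical bar.
foreach (string line in System.IO.File.ReadLines("input.txt"))
{
    int columnCount = line.Split('|').Length; // columns = separators + 1
    System.Console.WriteLine(columnCount);
}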
Hope that helps.
I have images of various formats (.png, .jpg, .bmp, etc.) stored as compressed text in a text column in a SQL Server 2005 table. I need to read the row, unzip the image and store it in an image column in another table.
I am using the SharpZip library, and all of the examples deal with file sources and destinations. I can't find anything that covers unzipping from one variable to another. A code snippet illustrating this, or a link to a relevant resource, would be much appreciated.
EDIT: A bit more information - the data is stored in a TEXT column. It appears as follows (text column abbreviated for display):
ImageID ImageData
1 FORMAT-ZIPV3 UEsDBBQAAAAIAOV6wzxdTnDvshs...
2 FORMAT-ZIPV3 UEsDBBQAAAAIAAF2yjxGncjOLgA...
3 FORMAT-ZIPV3 UEsDBBQAAAAIAKd6yjyjnQNr6gg...
4 FORMAT-ZIPV3 UEsDBBQAAAAIALdNyzyrPC8EMJw...
5 FORMAT-ZIPV3 UEsDBBQAAAAIAA1rOD1nZY1t0f0...
6 FORMAT-ZIPV3 UEsDBBQAAAAIANZplj2seyJ+VmM...
7 FORMAT-ZIPV3 UEsDBBQAAAAIAC5vhD27LPbPcv8...
8 FORMAT-ZIPV3 UEsDBBQAAAAIAK1qKz5DJNH3xMg...
9 FORMAT-ZIPV3 UEsDBBQAAAAIAHVkEztC3th/9hs...
10 FORMAT-ZIPV3 UEsDBBQAAAAIAEtXKz7DXHUdvow...
What I know for certain is that the images were compressed at some point in the process using SharpZip before being inserted into the table. It appears that the format information was added to the beginning of the data prior to inserting.
Looking at this data, would anyone have any insight on how this image data has been manipulated? Again, I need to get the uncompressed image data into a column of a data type conducive to reading for display on a web page.
EDIT: Ok, I'm stumped. Executing the following code produces the error, "Failed to convert parameter value from a Int32 to a Byte[]". It appears to be placing the length of the byte array into the byte array's value...
commandUncompressed.Connection = connectionUncompressed;
commandUncompressed.Parameters.Add("@Image_k", SqlDbType.VarChar, 10);
commandUncompressed.Parameters.Add("@ImageContents", SqlDbType.Image);
commandUncompressed.CommandText = sqlSaveImage;
connectionUncompressed.Open();

reader = command.ExecuteReader();
if (reader.HasRows)
{
    while (reader.Read())
    {
        Console.WriteLine(reader["Image_k"].ToString()); // Merely for testing
        String format = reader["ImageContents_Compressed"].ToString().Substring(0, 12);
        var offset = 13; //"FORMAT-ZIPV3 ".Length;
        var s = reader["ImageContents_Compressed"].ToString().Substring(offset);
        var bytes = Convert.FromBase64String(s);
        if (format == "FORMAT-ZIPV2 ")
        {
            bytes = ConvertStringToBytes(s); // Not a Base64-encoded string? External conversion function used.
        }
        using (var zis = new ZipInputStream(new MemoryStream(bytes)))
        {
            ZipEntry zipEntry = zis.GetNextEntry(); // Doesn't seem to work unless an entry has been referenced
            byte[] buffer = new byte[zis.Length];
            commandUncompressed.Parameters["@Image_k"].Value = reader["Image_k"].ToString();
            commandUncompressed.Parameters["@ImageContents"].Value = zis.Read(buffer, 0, buffer.Length);
            commandUncompressed.ExecuteNonQuery();
        }
    }
}
It appears to be reading the data from the source text column just fine; I just cannot figure out how to get that into the image-type parameter. The value for the buffer variable shows the length of the byte array rather than the actual bytes. Maybe that's what the Value property typically shows for byte arrays? I'm so close and yet so far away. :/
EDIT: OK, I'm a knucklehead. I made the following correction, and it works!
zis.Read(buffer, 0, buffer.Length);
commandUncompressed.Parameters["@ImageContents"].Value = buffer;
At this point I am only able to process FORMAT-ZIPV3 data, as I haven't figured out how to decode the FORMAT-ZIPV2 strings yet. Below is a sample of the V2 data. If anyone is able to determine the encoding, let me know. Would it be different if zipped using BZIP instead of the ZIP format?
ImageID ImageData
1 FORMAT-ZIPV2 504B03041400020008005157422A2E25FDBAF26701008D6901000E...
2 FORMAT-ZIPV2 504B03041400020008009159422A7FC94BA2B2540500D35705000E...
3 FORMAT-ZIPV2 504B0304140002000800685A422A0CAA51F4473A0600B97206000E...
4 FORMAT-ZIPV2 504B03041400020008001D5D422A770BD3ED201902002C4A02000E...
5 FORMAT-ZIPV2 504B0304140002000800325E422A4B6C2FB4045001001C6E01000E...
6 FORMAT-ZIPV2 504B03041400020008006F72422A5F793AC1A1F00200ECF302000E...
7 FORMAT-ZIPV2 504B0304140002000800D572422A1B348A731DE5000085EB00000E...
8 FORMAT-ZIPV2 504B03041400020008003D73422A8AEBB7F855640300DD1B04000E...
9 FORMAT-ZIPV2 504B03041400020008006368D528C5D0A6BA794900004A2502000E...
10 FORMAT-ZIPV2 504B03041400020008008E5B6C2A2D9E9C33D7AF05005CEC05000E...
In response to a similar question, someone on sqlmonster.com provided a nifty VarBinaryStream class. It works with a column type of varbinary(max).
If your data is stored in a varbinary(max), and is in zip format, you could use that class to instantiate a VarBinaryStream, then instantiate a ZipInputStream around that, and ba-da-boom, you're there. Just read from the ZipInputStream.
In C# it might look like this
using (var imageSrc = new VarBinarySource(connection,
                                          "Table.Name",
                                          "Column",
                                          "KeyColName",
                                          1))
{
    using (var s = new VarBinaryStream(imageSrc))
    {
        using (var zis = new ZipInputStream(s))
        {
            ....
        }
    }
}
If the images are small, then you probably wouldn't want all this streaming stuff. If the column is a binary(n) or a varbinary(n) where n is less than 8000, just use the SqlBinary type and read in all the data into memory, then instantiate a MemoryStream around that. Simpler. In VB.NET it looks something like this:
Dim bytes As Byte() = dr.GetSqlBinary(columnNumber).Value

Using ms As New MemoryStream(bytes)
    Using zis As New ZipInputStream(ms)
        ...
    End Using
End Using
Finally, I'm going to question the wisdom of applying zip compression to .jpg images, and similar. The jpg format is already compressed; compressing it again before putting the data into SQL Server won't cause the data to become appreciably smaller. It only increases processing time. If possible, I'd suggest you reconsider your design for storage of compressed images.
OK, with the update you provided containing the data format, you can draw some conclusions.
The data is an actual string. Suspecting it was a Base64-encoded string, I did a small test and used Convert.ToBase64String() on a byte stream containing a zip file. It looks like this: UEsDBBQAAAAIAJJyYyk3M56F+QIAA...
Aha! You have a Base64-encoded (string) version of the byte data for a bona fide zip file. To decode it, strip the prefix, then use Convert.FromBase64String() to get the byte array, wrap it in a MemoryStream, and read it with ZipInputStream.
something like this:
var offset = "FORMAT-ZIPV3 ".Length();
var s = sqlReader["CompressedImage"].ToString().Substring(offset);
var bytes = Convert.FromBase64String(s);
using (var zis = new ZipInputStream(new MemoryStream(bytes)))
{
...
zis.Read(...);
...
}
If the data is "really long", you're going to want to stream it out of that table, rather than just read it into a big string and convert it. I don't know how large text columns can be, but supposing that it could be 500mb, you don't want a 500mb string, and you don't want to do a conversion of a 500mb string with Convert.FromBase64String(). In that case You need to use a Base64Stream, or the FromBase64Transform class in the System.Security.Cryptography namespace.
Editorial comment. It is sort of backwards to zip-compress image data. The images are probably compressed already. But to compound that backwardsness by then doing a base64 encode, thereby expanding the data... ??? That is triple backwards. That makes noooooo sense at all. I understand that's how your vendor supplied it.
OK, with your further update, using this as the format:
ImageID ImageData
1 FORMAT-ZIPV2 504B03041400020008005157422A2E25FDBAF26701008D6901000E...
2 FORMAT-ZIPV2 504B03041400020008009159422A7FC94BA2B2540500D35705000E...
That data is still zipfile data, but it is encoded as simple hex digits. You need to convert it to a byte array. Here's some code to do that:
public static class ConvertEx
{
    static readonly String prefix = "FORMAT-ZIPV2 ";

    public static string ToHexString(byte[] b)
    {
        System.Text.StringBuilder sb1 = new System.Text.StringBuilder();
        int i = 0;
        for (i = 0; i < b.Length; i++)
        {
            sb1.Append(System.String.Format("{0:X2}", b[i]));
        }
        return sb1.ToString().ToLower();
    }

    public static byte[] ToByteArray(string s)
    {
        if (s.StartsWith(prefix))
        {
            System.Console.WriteLine("removing prefix");
            s = s.Substring(prefix.Length);
        }
        s = s.Trim(); // remove whitespace
        System.Console.WriteLine("length: {0}", s.Length);
        var r = new byte[s.Length / 2];
        for (int i = 0; i < s.Length; i += 2)
        {
            r[i / 2] = (byte)Convert.ToUInt32(s.Substring(i, 2), 16);
        }
        return r;
    }
}
You can use that this way:
string s = GetStringContentFromDatabase();
var decoded = ConvertEx.ToByteArray(s);
using (var ms = new MemoryStream(decoded))
{
    // use DotNetZip to read the zip file
    // SharpZipLib is something similar...
    using (var zip = ZipFile.Read(ms))
    {
        // print out the list of entries in the zipfile
        foreach (var e in zip)
        {
            System.Console.WriteLine("{0}", e.FileName);
        }
    }
}
The examples on the SharpZip wiki use Stream objects; while the sample uses a File, you could just as easily use a MemoryStream object there and the sample would work the same.
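For instance, a minimal sketch (compressedBytes is assumed to already hold the raw zip bytes pulled out of the database column):
// Sketch: unzip from memory with SharpZipLib (ICSharpCode.SharpZipLib.Zip)
// instead of from a file.
using (var zis = new ZipInputStream(new MemoryStream(compressedBytes)))
{
    ZipEntry entry = zis.GetNextEntry(); // position on the first entry
    using (var output = new MemoryStream())
    {
        byte[] buffer = new byte[4096];
        int read;
        while ((read = zis.Read(buffer, 0, buffer.Length)) > 0)
        {
            output.Write(buffer, 0, read);
        }
        byte[] imageBytes = output.ToArray(); // the uncompressed image data
    }
}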