How to read a text file using ActionScript 3?

I am trying to read a text file in my AIR project. It is actually the config file used by TinkerProxy. Here is what I have so far:
//Read settings from TinkerProxy Config File
var TextFileLoader:URLLoader = new URLLoader();
var ArrayOfLines:Array;
TextFileLoader.addEventListener(Event.COMPLETE, onLoaded);
TextFileLoader.load(new URLRequest("/tinkerproxy-2_0/serproxy.cfg"));
//TextFileLoader.dataFormat = URLLoaderDataFormat.VARIABLES;
function onLoaded(e:Event):void {
    ArrayOfLines = e.target.data.split(/\r/);
    trace(e.target.data);
}
trace(ArrayOfLines[0]);
What I'm really trying to do is find the 'net_port1=5331' entry and store '5331' in a variable.
Here is a sample of the text file:
# Generated by TinkerProxy Configurator
#
# Timeout in seconds
# 0 means infinite, no timeout
timeout=0
newlines_to_nils=false
comm_ports=1
serial_device1=COM1
net_port1=5331
comm_baud1=9600
comm_databits1=8
comm_stopbits1=1
comm_parity1=none
The file is autogenerated, so I can't edit it (rather, I want to read it exactly as it is generated).
I'm able to see the data via trace(e.target.data), but I cannot access it via trace(ArrayOfLines[0]), for instance.
What am I missing?
Thanks in advance.

You probably need to split on \n (Unix) or \r\n (Windows), not \r alone. Also note that URLLoader.load() is asynchronous: the trace(ArrayOfLines[0]) at the bottom of your script runs before the COMPLETE event fires, while ArrayOfLines is still null, so access the array inside the handler instead.
When loading a text file from the filesystem and breaking it into lines, I usually normalize line endings first:
var lines:Array = text.replace(/\r\n/g, "\n").split("\n");
Then you can iterate over the lines and decode each one as desired. The file appears to be akin to the .properties format, for which there is no built-in parser in AS3 (unlike XML, JSON, or URLVariables), but it's a pretty simple format to parse yourself. For example:
var props:Object = {};
for each(var line:String in lines){
    // skip blank lines and comment lines
    if(line == "" || line.charAt(0) == "#")
        continue;
    var arr:Array = line.split("=");
    if(arr.length == 2)
        props[arr[0]] = arr[1];
}
trace(JSON.stringify(props, null, 2));
Outputs this:
{
  "comm_parity1": "none",
  "comm_ports": "1",
  "newlines_to_nils": "false",
  "comm_baud1": "9600",
  "serial_device1": "COM1",
  "comm_databits1": "8",
  "timeout": "0",
  "comm_stopbits1": "1",
  "net_port1": "5331"
}
Which allows you to access properties by name:
trace(props.net_port1); // "5331"
(Note that all values are strings, so for example newlines_to_nils is not false, it is "false".)
Alternatively, you could search for the key you are looking for and extract just the data you want:
var key:String = "net_port1=";
var index:int = text.indexOf(key);
if(index != -1){
    // extract the text after the desired search key
    var value:String = text.substring(index + key.length);
    // parseInt will read until it hits a non-numeric character
    var net_port1:int = parseInt(value);
    trace(net_port1); // 5331
}

Here is the solution that worked for me. Thanks again to Aaron for his answer on properties; I may use that in the future.
//Read settings from TinkerProxy config file
var TextFileLoader:URLLoader = new URLLoader();
var ArrayOfLines:Array;
var Port:int;
var COM:int;
TextFileLoader.addEventListener(Event.COMPLETE, onLoaded);
TextFileLoader.load(new URLRequest("/tinkerproxy-2_0/serproxy.cfg"));

function findSubstring(array:Array, string:String):int {
    for(var i:int = 0; i < array.length; i++){
        if(array[i].indexOf(string) > -1){
            return i; // index of the line containing the substring
        }
    }
    return -1; // not found
}

function onLoaded(e:Event):void {
    ArrayOfLines = e.target.data.split(String.fromCharCode(13)); // split on carriage returns (\r)
    var portIndex:int = findSubstring(ArrayOfLines, "net_port");
    if(portIndex > -1){
        Port = Number(ArrayOfLines[portIndex].split("=")[1]);
    }
    else{
        Port = 5331; // default if no port entry is found
    }
    var comIndex:int = findSubstring(ArrayOfLines, "serial_device1");
    if(comIndex > -1){
        COM = Number(ArrayOfLines[comIndex].split("serial_device1=COM")[1]);
    }
    else{
        COM = 1; // default if no serial device entry is found
    }
    trace("COM: " + COM + " Port: " + Port);
}

Related

How to write in existing excel file using angular or node js

I am stuck. I am working on a MEAN stack project, and I need to write data into an existing macro-enabled, validated Excel sheet. I have googled a lot, but I couldn't find any Node or Angular module that fulfills this requirement. Every option I found creates a new file; none of them updates an existing Excel file, which is really strange.
Here is my requirement, step by step:
I have a macro-enabled Excel file (.xlsm).
I have to open it and write some data into it from Angular or Node.js.
After that, send it to the user for download.
Please help.
The SheetJS API allows you to convert Microsoft Excel (.xls / .xlsx) and OpenDocument (.ods) formats to JSON streams or files, and to import/export from/to MongoDB or MongooseJS. The API is simple and easy to study.
On GitHub you can find the documentation, tutorials, and code examples.
Site: http://sheetjs.com/
Project: https://github.com/SheetJS/js-xlsx
Interactive Demo: http://oss.sheetjs.com/js-xlsx/
Use the code example below; before running it, run npm install xlsx, and to open the .xlsm file add this chunk of code: var workbook = XLSX.readFile('test.xlsx');.
/* require XLSX */
var XLSX = require('xlsx');

function datenum(v, date1904) {
    if(date1904) v += 1462;
    var epoch = Date.parse(v);
    return (epoch - new Date(Date.UTC(1899, 11, 30))) / (24 * 60 * 60 * 1000);
}

function sheet_from_array_of_arrays(data, opts) {
    var ws = {};
    var range = {s: {c:10000000, r:10000000}, e: {c:0, r:0}};
    for(var R = 0; R != data.length; ++R) {
        for(var C = 0; C != data[R].length; ++C) {
            if(range.s.r > R) range.s.r = R;
            if(range.s.c > C) range.s.c = C;
            if(range.e.r < R) range.e.r = R;
            if(range.e.c < C) range.e.c = C;
            var cell = {v: data[R][C]};
            if(cell.v == null) continue;
            var cell_ref = XLSX.utils.encode_cell({c:C, r:R});
            if(typeof cell.v === 'number') cell.t = 'n';
            else if(typeof cell.v === 'boolean') cell.t = 'b';
            else if(cell.v instanceof Date) {
                cell.t = 'n'; cell.z = XLSX.SSF._table[14];
                cell.v = datenum(cell.v);
            }
            else cell.t = 's';
            ws[cell_ref] = cell;
        }
    }
    if(range.s.c < 10000000) ws['!ref'] = XLSX.utils.encode_range(range);
    return ws;
}

/* original data */
var data = [[1,2,3], [true, false, null, "sheetjs"], ["foo", "bar", new Date("2014-02-19T14:30Z"), "0.3"], ["baz", null, "qux"]];
var ws_name = "SheetJS";

function Workbook() {
    if(!(this instanceof Workbook)) return new Workbook();
    this.SheetNames = [];
    this.Sheets = {};
}

var wb = new Workbook(), ws = sheet_from_array_of_arrays(data);

/* add worksheet to workbook */
wb.SheetNames.push(ws_name);
wb.Sheets[ws_name] = ws;

/* write file */
XLSX.writeFile(wb, 'test.xlsx');
Modern Office files are ZIP archives containing XML files, so in Node.js you can extract the data with e.g. node-zip, modify it with e.g. xml2js, and zip it back up.
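To make that unzip-edit-rezip round trip concrete, here is a minimal sketch using Python's standard zipfile module (the file name, sheet path, and string replacement are all hypothetical, and a real tool would edit the XML tree rather than string-replace); the same flow applies whichever zip/XML libraries you pick in Node.js:
import zipfile

# Read one XML entry out of the archive ('book.xlsm' and the sheet path are hypothetical).
with zipfile.ZipFile("book.xlsm") as zin:
    xml = zin.read("xl/worksheets/sheet1.xml").decode("utf-8")

# Modify it (for illustration only; real code would parse and edit the XML tree).
xml = xml.replace("OLD VALUE", "NEW VALUE")

# Write every entry back into a fresh archive, substituting the edited sheet.
with zipfile.ZipFile("book.xlsm") as zin, \
     zipfile.ZipFile("book-edited.xlsm", "w", zipfile.ZIP_DEFLATED) as zout:
    for item in zin.infolist():
        data = zin.read(item.filename)
        if item.filename == "xl/worksheets/sheet1.xml":
            data = xml.encode("utf-8")
        zout.writestr(item, data)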

What is the absolute fastest way to compare one large set of data to another? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers. Closed 7 years ago.
You have a big list of unique items (hundreds of thousands of lines). You want to see if those items exist in another set of data. That other set of data is just a file with items line by line, and are also a unique set of data. You can put any data in a db, use any programming language, etc.
What do you do to compare these the fastest? Only constraints are that the hardware is a normal server, not a db server. One spindle max.
C? Implementing sorting algorithms? DB for indexing etc?
The admins took out the answer I went with, "because the question is too broad": Bloom filters in Python. It's really easy to implement with Python's bloom filter library.
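For anyone curious what that looks like, here is a minimal from-scratch Bloom filter sketch in Python (the bit-array size and hash count below are arbitrary toy values; a real setup would size them from the expected item count and target false-positive rate, or just use an off-the-shelf bloom filter library as above):
import hashlib

class BloomFilter:
    # Toy Bloom filter: k salted SHA-256 hashes over an m-bit array.
    def __init__(self, m_bits=8000000, k=7):
        self.m = m_bits
        self.k = k
        self.bits = bytearray(m_bits // 8 + 1)

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(("%d:%s" % (i, item)).encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        # May report a false positive, but never a false negative.
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

# Build the filter from one file, then stream the other file past it.
bf = BloomFilter()
with open("list1.txt") as f:
    for line in f:
        bf.add(line.rstrip("\n"))
with open("list2.txt") as f:
    matches = [line.rstrip("\n") for line in f if line.rstrip("\n") in bf]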
If your "test" file has a resonable size, a quick solution is to build a hash map for every entry in that file. A C# solution (runs in Big O ( N )) is this:
public static bool SetIsPresentIn(string firstFileLocation, string secondFileLocation)
{
    HashSet<string> set = new HashSet<string>();
    using (var sr = new FileStream(firstFileLocation, FileMode.Open, FileAccess.Read))
    {
        using (var reader = new StreamReader(sr))
        {
            while (reader.EndOfStream == false)
            {
                var text = reader.ReadLine();
                set.Add(text);
            }
        }
    }
    // iterate through the second file, removing each matching line from the set
    using (var secondFile = new FileStream(secondFileLocation, FileMode.Open, FileAccess.Read))
    {
        using (var reader = new StreamReader(secondFile))
        {
            while (reader.EndOfStream == false)
            {
                var line = reader.ReadLine();
                // perform a lookup!
                if (set.Remove(line) && set.Count == 0)
                    return true;
            }
        }
    }
    return set.Count == 0;
}
Otherwise I would do something cleverer: split one file into partitions on disk, where each partition's file name is the hash code of the lines it contains. When iterating over the other file, compute each line's hash code and search only inside the corresponding partition.
Example:
public static bool SetIsPresentInUsingFilePartitions(string firstFileLocation, string secondFileLocation, string partitionsRootLocation)
{
    Dictionary<int, StreamWriter> partitionWriters = new Dictionary<int, StreamWriter>();
    Dictionary<int, string> locations = new Dictionary<int, string>();
    using (var sr = new FileStream(secondFileLocation, FileMode.Open, FileAccess.Read))
    {
        using (var reader = new StreamReader(sr))
        {
            while (reader.EndOfStream == false)
            {
                var text = reader.ReadLine();
                var hCode = text.GetHashCode();
                var fileName = Path.Combine(partitionsRootLocation, hCode.ToString());
                if (false == partitionWriters.ContainsKey(hCode))
                {
                    var fs = new FileStream(fileName, FileMode.Create, FileAccess.ReadWrite);
                    partitionWriters[hCode] = new StreamWriter(fs);
                    locations[hCode] = fileName;
                }
                partitionWriters[hCode].WriteLine(text);
            }
        }
    }
    // close writers
    foreach (var item in partitionWriters)
        item.Value.Dispose();
    using (var sr = new FileStream(firstFileLocation, FileMode.Open, FileAccess.Read))
    {
        using (var reader = new StreamReader(sr))
        {
            while (reader.EndOfStream == false)
            {
                var line = reader.ReadLine();
                var hCode = line.GetHashCode();
                string location;
                if (false == locations.TryGetValue(hCode, out location))
                {
                    return false; // there's a line that is not found in the second file!
                }
                var found = false;
                using (var file = new FileStream(location, FileMode.Open, FileAccess.Read))
                {
                    using (var fs = new StreamReader(file))
                    {
                        while (fs.EndOfStream == false)
                        {
                            var firstFileLine = fs.ReadLine();
                            if (line == firstFileLine)
                            {
                                found = true;
                                break;
                            }
                        }
                    }
                }
                if (!found)
                    return false;
            }
        }
    }
    return true;
}
You could use a simple bash script:
First, sort the lists:
$ sort list1.txt > list1.sorted.txt
$ sort list2.txt > list2.sorted.txt
Then do a join to find the elements common to both lists:
$ join -1 1 -2 1 list1.sorted.txt list2.sorted.txt
This should be relatively fast and has a low memory consumption.

Reading and parsing text file exception-C#

I am parsing big text files and it works fine for a while, but after a few minutes it gives me an exception (An unhandled exception of type 'System.UnauthorizedAccessException' occurred in System.Core.dll. Additional information: Access to the path is denied.)
I get the exception on the line below:
accessor = MemoryMapped.CreateViewAccessor(offset, length, MemoryMappedFileAccess.Read);
Here is my function:
public static void CityStateZipAndZip4(string FilePath, long offset, long length, string spName)
{
    try
    {
        long indexBreak = offset;
        string fileName = Path.GetFileName(FilePath);
        if (fileName.Contains(".txt"))
            fileName = fileName.Replace(".txt", "");
        System.IO.FileStream file = new System.IO.FileStream(FilePath, FileMode.Open, FileAccess.Read, FileShare.Read);
        Int64 b = file.Length;
        MemoryMappedFile MemoryMapped = MemoryMappedFile.CreateFromFile(file, fileName, b, MemoryMappedFileAccess.Read, null, HandleInheritability.Inheritable, false);
        using (MemoryMapped)
        {
            //long offset = 182; // 256 megabytes
            //long length = 364; // 512 megabytes
            MemoryMappedViewAccessor accessor = MemoryMapped.CreateViewAccessor(offset, length, MemoryMappedFileAccess.Read);
            byte byteValue;
            int index = 0;
            int count = 0;
            StringBuilder message = new StringBuilder();
            do
            {
                if (indexBreak == index)
                {
                    count = count + 1;
                    accessor.Dispose();
                    string NewRecord = message.ToString();
                    offset = offset + indexBreak;
                    length = length + indexBreak;
                    if (NewRecord.IndexOf("'") != -1)
                    { NewRecord = NewRecord.Replace("'", "''"); }
                    // string Sql = "insert into " + DBTableName + " (ID, DataString) values( " + count + ",'" + NewRecord + "')";
                    string Code = "";
                    if (spName == AppConfig.sp_CityStateZip)
                    {
                        Code = NewRecord.Trim().Substring(0, 1);
                    }
                    InsertUpdateAndDeleteDB(spName, NewRecord.Trim(), Code);
                    accessor = MemoryMapped.CreateViewAccessor(offset, length, MemoryMappedFileAccess.Read);
                    message = new StringBuilder();
                    index = 0;
                    //break;
                }
                byteValue = accessor.ReadByte(index);
                if (byteValue != 0)
                {
                    char asciiChar = (char)byteValue;
                    message.Append(asciiChar);
                }
                index++;
            } while (byteValue != 0);
        }
        MemoryMapped.Dispose();
    }
    catch (FileNotFoundException)
    {
        Console.WriteLine("Memory-mapped file does not exist. Run Process A first.");
    }
}
Somewhere deep in the resource-processing code there is something like this:
try {
    // Try loading some strings here.
} catch {
    // Oops, could not load strings, try another way.
}
The exception is thrown and handled right there; it would never surface in your application. The only way to see it is to attach a debugger and observe the message.
As you can see from the code, it has nothing to do with your problem. The real issue here is that the debugger is showing you something you were never meant to see.
Run the solution without the debugger and it works fine.
This exception means that Windows is not granting your program read access to the file.
Have you made sure the file is not locked when your program tries to read it? For example, it could be a file that your own program is currently using.
If not, try running your program as an Administrator and see if that makes a difference.

How to detect and separate concatenated files?

I am trying to find a method to separate two files that have been concatenated together using
copy /b file1+file2 file3.
I know the MIME type and file type of at least one of the two files.
With the following C# code you can do the split, based on the fact that a ZIP file marks each entry with a 4-byte local file header signature. This code will break if the EXE happens to contain those same 4 bytes somewhere; to handle that case you would have to dig through the PE/COFF header and add up all the section sizes.
And no, it is not very efficient to copy a stream byte by byte...
using(var fs = new FileStream(@"exeandzip.screwed", FileMode.Open))
{
    var lfh = new byte[] { 0x50, 0x4b, 0x03, 0x04 }; /* zip local file header signature */
    var match = 0;
    var splitAt = 0;
    var keep = new Queue<int>();
    var b = fs.ReadByte();
    using(var exe = new FileStream(@"exeandzip.screwed.exe", FileMode.Create))
    {
        while((b != -1) && (match < lfh.Length))
        {
            splitAt++;
            if (b == lfh[match])
            {
                match++;
                keep.Enqueue(b);
            }
            else
            {
                // flush bytes that partially matched the signature but turned out not to
                while(keep.Count > 0)
                {
                    exe.WriteByte((byte) keep.Dequeue());
                }
                exe.WriteByte((byte) b);
                match = 0;
            }
            b = fs.ReadByte();
        }
    }
    if (match == lfh.Length && b != -1)
    {
        keep.Enqueue(b);
        splitAt = splitAt - lfh.Length;
        Console.WriteLine(splitAt);
        using(var zip = new FileStream(@"exeandzip.screwed.zip", FileMode.Create))
        {
            // the queued signature bytes start the zip file
            while(keep.Count > 0)
            {
                zip.WriteByte((byte) keep.Dequeue());
            }
            b = fs.ReadByte();
            while(b != -1)
            {
                zip.WriteByte((byte) b);
                b = fs.ReadByte();
            }
        }
    }
}
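On the efficiency point above: if the concatenated file fits in memory, the byte-at-a-time loop can be replaced by a single buffered signature search. Here is that idea as a Python sketch (same hypothetical file names as the C# version, and the same caveat applies if the EXE happens to contain the signature bytes):
# Read the whole concatenated file and split it at the first zip signature.
with open("exeandzip.screwed", "rb") as f:
    data = f.read()

sig = b"PK\x03\x04"  # zip local file header signature
i = data.find(sig)
if i != -1:
    with open("exeandzip.screwed.exe", "wb") as exe:
        exe.write(data[:i])     # everything before the signature is the EXE
    with open("exeandzip.screwed.zip", "wb") as zipped:
        zipped.write(data[i:])  # signature onwards is the ZIP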
Or you can use foremost -i <input file> -o <output directory>.
I've even split an Apple webarchive file this way.

JavaMail doesn't read MimeMultipart emails

if (contentType.contains("multipart")) {
    // content may contain attachments
    Multipart multiPart = (Multipart) message.getContent();
    numberOfParts = multiPart.getCount();
    for (int partCount = 0; partCount < numberOfParts; partCount++) {
        BodyPart part = multiPart.getBodyPart(partCount);
        String disposition = part.getDisposition();
        InputStream inputStream = null;
        if (disposition == null) {
            MimeBodyPart mbp = (MimeBodyPart) multiPart.getBodyPart(partCount);
            if (mbp.getContent() instanceof MimeMultipart) {
                MimeMultipart mmp = (MimeMultipart) mbp.getContent();
                messageContent = mmp.getBodyPart(0).getContent().toString();
                //System.out.println("bodyContent " + bodyContent);
            }
            else {
                messageContent = multiPart.getBodyPart(partCount).getContent().toString();
            }
        }
        else if (Part.ATTACHMENT.equalsIgnoreCase(part.getDisposition())) {
            // this part is an attachment
            String fileName = part.getFileName();
            attachFiles += fileName + ", ";
            //part.saveFile(saveDirectory + File.separator + fileName);
        }
        else if (Part.INLINE.equalsIgnoreCase(part.getDisposition())) {
            // this part is an inline attachment
            String fileName = part.getFileName();
            attachFiles += fileName + ", ";
            //mbp.saveFile(saveDirectory + File.separator + fileName);
        }
        else {
            // this part may be the message content
            messageContent = part.getContent().toString();
        }
    }
    if (attachFiles.length() > 1) {
        attachFiles = attachFiles.substring(0, attachFiles.length() - 2);
    }
} else if (contentType.contains("text/plain") || contentType.contains("text/html")) {
    Object content = message.getContent();
    if (content != null) { messageContent = content.toString(); }
}
And now messages of type text/plain and text/html are retrieved fine. The problem is multipart/related: when the message has attachments and the content is HTML, some messages come through and some do not. I noticed that it depends on this line:
messageContent = mmp.getBodyPart(0).getContent().toString();
If instead of "0" I use "partCount", it retrieves all but one particular message; if instead of "0" I use "1", it retrieves that particular one but not the others. For that particular message numberOfParts is "3", and for the others it is "2". I have no idea what is wrong; maybe the wrong parameters are being passed?
I'm not really sure what problem you're trying to solve, but just in case, this JavaMail FAQ entry might be helpful.
multipart/mixed and multipart/related are very similar in that they have one main part and a bunch of other parts usually thought of as "attachments". Sometimes the disposition will tell you that a part is an attachment, and sometimes it won't. Some mailers aren't very consistent in their use of disposition.
One of the unusual cases is multipart/alternative, but it doesn't sound like that's the problem you're running into.
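The robust pattern, whatever the library, is to walk the MIME tree recursively instead of hard-coding part indices like getBodyPart(0). As a language-neutral illustration, here is that walk using Python's standard email module (raw_bytes is an assumed input holding the raw message source); the JavaMail equivalent recurses into each part whose content is itself a Multipart:
import email
from email import policy

# Parse a raw RFC 822 message; raw_bytes is an assumed input.
msg = email.message_from_bytes(raw_bytes, policy=policy.default)

body = None
attachments = []
for part in msg.walk():  # depth-first over the whole MIME tree
    if part.is_multipart():
        continue  # container parts carry no content themselves
    if part.get_content_disposition() == "attachment":
        attachments.append(part.get_filename())
    elif body is None and part.get_content_type() in ("text/plain", "text/html"):
        body = part.get_content()  # first text leaf becomes the main body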
