Is it possible to hide the characters encoded in a Data Matrix code? - datamatrix

I have an application where I stamp a Data Matrix code that is later read by a specific reader to complete a workflow. This code is sensitive and must not be copied, but when I scan it into Notepad or any other app, the characters of the stamped code are shown. That is the problem: someone could copy the characters, encode them into a Data Matrix of their own, print it, and reuse it, which must not happen. So, is there a way to keep the characters from being shown when the code is scanned, so that they cannot be used again? Thanks.

Related

Sudoku Game: How to test code that includes reading a binary file

I am assigned to create a Sudoku game, where the 9x9 board should take its initial settings from a binary file. Every record contains 3 bytes, in this format:
1st byte is row
2nd byte is column
3rd byte is the number that we wish to enter, in the combination above
for example: 069
On the 1st row and 7th column we put number 9.
My question here is: how should I test the code, when my only option seems to be creating a binary file myself and reading it back again?
First, what's so bad about creating your own repository of test files? You can even write a program in bash or something to run your sudoku game on all of your input files automatically to check if your code still works.
If you are working with FILE* pointers, though, you can likely use fmemopen in test code to create an in-memory stream that you can use with fread, etc.
If you are working with fds, you can do something similar with pipe. Write into one end, read from the other.
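For instance, here is a minimal sketch of the fmemopen approach (fmemopen is POSIX; the three record bytes are the 069 example from the question):

#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <assert.h>

int main(void)
{
    /* Three-byte record: row 0, column 6, value 9 */
    unsigned char record[3] = {0, 6, 9};

    /* Open an in-memory stream over the buffer, no file needed */
    FILE *fp = fmemopen(record, sizeof record, "rb");
    assert(fp != NULL);

    /* Any loader that takes a FILE* can now be tested against it */
    unsigned char buf[3];
    size_t n = fread(buf, 1, sizeof buf, fp);
    assert(n == 3);
    assert(buf[0] == 0 && buf[1] == 6 && buf[2] == 9);

    fclose(fp);
    return 0;
}

The same idea works for your real loader: pass the in-memory stream wherever it expects a FILE* opened on the settings file.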

How to read a text based data file into an array in C?

I have to read a text-based data file, with an unknown number of data points, into an array in C, but I can't work out how to do this. I can't even get my program to open the text file successfully, let alone put its contents into an array.
The file contains numerical values, so it needs to be read in as numbers, not as a string. Ideally the file name should be input by the user.
I basically need the program to:
Ask user to input file name (I understand this is just a simple printf job)
When the user inputs the file name, the program opens the text file, stores the data from it into an array of an appropriate size.
Print entire array to show that this has been done.
If anyone could give a step to step explanation of how this can be done I would really appreciate it.
A full step-by-step walkthrough, produced without any input from you, would just be a copy of someone else's work. The best advice is to learn things step by step on your own.
File I/O in C: http://www.tutorialspoint.com/cprogramming/c_file_io.htm
If you want to add additional features like user input: How to read a string from user input in C, put in array and print
Do some research on file content and how a program handles it (it seems you are referring to an ASCII-format file).
You should do some searching before asking a question at this level of complexity. If you want the same kind of advice for this task in the future, I suggest adding your code here.
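That said, here is a minimal sketch of the whole flow described in the question, assuming whitespace-separated numeric values and growing the array as values arrive (the doubling growth strategy is just one common choice):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char name[256];
    printf("Enter file name: ");
    if (scanf("%255s", name) != 1)
        return 1;

    FILE *fp = fopen(name, "r");
    if (fp == NULL) {
        perror("fopen");        /* explains why the open failed */
        return 1;
    }

    size_t used = 0, cap = 16;
    double *data = malloc(cap * sizeof *data);
    if (data == NULL)
        return 1;

    double x;
    while (fscanf(fp, "%lf", &x) == 1) {
        if (used == cap) {      /* array full: double its capacity */
            cap *= 2;
            double *tmp = realloc(data, cap * sizeof *data);
            if (tmp == NULL) { free(data); fclose(fp); return 1; }
            data = tmp;
        }
        data[used++] = x;
    }
    fclose(fp);

    for (size_t i = 0; i < used; i++)   /* print the array to verify */
        printf("%g\n", data[i]);

    free(data);
    return 0;
}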

Am I misunderstanding this? Why is COPY changing the output? (PostgreSQL)

I'm very close to solving a big problem that I'm having at the moment, and this is the last bit. I just don't know why the output is different from what's in the database, when I need it EXACTLY the same.
Reading the data before entering the database:
[NULL][NULL][NULL][SO][etc.......]
Reading the data from COPY to text file:
\\000\\000\\000\\016[etc...] (it matches, basically)
Reading the data after COPY using binary format:
PGCOPY
ÿ
[NULL][NULL][NULL][EOT][etc.......] (first line changes a fair bit)
(rest of the data stays exactly the same.)
˜ÿÿ
The PostgreSQL query being run, for testing's sake:
COPY (SELECT byteacolumn FROM tablename WHERE id = 1) TO 'C:\path\file' (FORMAT binary);
So using the binary format gives me almost what I need, but not quite. I could botch the code to ignore the added lines, but I wouldn't know what the first line of data should be.
TL;DR: COPY is adding lines and changing the first row of my data. How do I make it stop? :(
The binary COPY format is really only designed to be consumed by a COPY FROM command, so it contains a lot of metadata to allow Postgres to interpret it.
After the first two lines, the next 9 bytes are also part of the header. The next 2 bytes give the number of fields in the following record, and the next 4 bytes give the number of bytes in the following field. Only then does the actual data begin.
The full details of the format can be found in the documentation, but be aware that they could change in a future release.
(However, assuming this is the same problem you were asking about here, I think this is the wrong way to go about it. You were on the right track with lo_import/lo_export.)
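Still, to illustrate the layout described above, here is a minimal sketch in C that walks past the header of a binary COPY file and reports the first tuple's field count and field length (the file path is hypothetical; all integers in the format are big-endian):

#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>   /* ntohl/ntohs: the format is big-endian */

int main(void)
{
    FILE *fp = fopen("file.bin", "rb");        /* hypothetical path */
    if (fp == NULL) { perror("fopen"); return 1; }

    /* 11-byte signature: "PGCOPY\n\377\r\n\0" */
    unsigned char sig[11];
    if (fread(sig, 1, 11, fp) != 11 ||
        memcmp(sig, "PGCOPY\n\377\r\n\0", 11) != 0) {
        fprintf(stderr, "not a binary COPY file\n");
        return 1;
    }

    /* 4-byte flags field, then 4-byte header extension length */
    uint32_t flags, extlen;
    if (fread(&flags, 4, 1, fp) != 1 || fread(&extlen, 4, 1, fp) != 1)
        return 1;
    fseek(fp, (long)ntohl(extlen), SEEK_CUR);  /* skip the extension */

    uint16_t nfields;   /* per tuple: number of fields */
    uint32_t len;       /* per field: byte length (0xFFFFFFFF means NULL) */
    if (fread(&nfields, 2, 1, fp) != 1 || fread(&len, 4, 1, fp) != 1)
        return 1;

    printf("first tuple: %u field(s), first field is %u bytes of raw data\n",
           ntohs(nfields), ntohl(len));
    fclose(fp);
    return 0;
}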

How to search for common passwords in two given 20GB files?

I have two files of size 20GB each, and I have to remove the passwords they have in common from one of the files.
I sorted the second file with the UNIX sort command, then split the sorted file into many pieces with the split command so that each piece fits in RAM. After splitting into n files, I used a structure array of size n to store the first password of each split file together with its file name.
Then, for each key in the first file, I binary-searched that structure array (against the first password stored in each structure) to get the index of the corresponding file, and then binary-searched within that split file.
I assumed 20 characters as the maximum password length.
This program is not yet efficient. Please help me make it efficient if possible, and please give me some advice on sorting the 20GB file efficiently.
I'm on a 64-bit system with 8GB RAM and an i3 quad-core processor.
I tested my program with two files of size 10MB. It took about 2.66 hours without any optimization options. At that rate it will take about 7-8 hours to check every password of the 20GB files after splitting, sorting, and binary searching. Can I improve the time complexity? I mean, can I make it run faster?
Check out external sorting. See http://www.umbrant.com/blog/2011/external_sorting.html which does have code at the end of the page (https://github.com/umbrant/extsort).
The idea behind external sorting is selecting and sorting equidistant samples from the file. Then partitioning the file at sampling points, sorting the partitions and merging the results.
example numbers = [1, 100, 2, 400, 60, 5, 0, 4]
example samples (distance 4) = 1, 60
chunks = {0, 1, 2, 5, 4}, {60, 100, 400}
Also, I don't think splitting the file is a good idea because you need to write 20GB to disk to split them. You might as well create the structure on the fly by seeking within the file.
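To make the chunking example above concrete, here is a toy in-memory sketch of the partitioning step (a real external sort streams each chunk to disk, or seeks within the file as just suggested):

#include <stdio.h>

/* Split the example numbers at the top sample point (60): values
 * below it form chunk 0, the rest form chunk 1. Each chunk is then
 * sorted independently and the sorted chunks are merged. */
int main(void)
{
    int numbers[] = {1, 100, 2, 400, 60, 5, 0, 4};
    int n = sizeof numbers / sizeof numbers[0];
    int pivot = 60;    /* larger of the sorted samples 1, 60 */

    printf("chunk 0:");
    for (int i = 0; i < n; i++)
        if (numbers[i] < pivot) printf(" %d", numbers[i]);

    printf("\nchunk 1:");
    for (int i = 0; i < n; i++)
        if (numbers[i] >= pivot) printf(" %d", numbers[i]);
    printf("\n");
    return 0;
}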
For a previous SE question, "What algorithm to use to delete duplicates?" I described an algorithm for a probably-similar problem except with 50GB files instead of 20GB. The method is faster than sorting the big files in that problem.
Here is an adaptation of the method to your problem. Let's call the original two files A and B, and suppose A is larger than B. I don't understand from your problem description what is supposed to happen if or when a duplicate is detected, but in the following I assume you want to leave file A unchanged, and remove from B any items that also are in A. I also assume that entries within A are specified to be unique within A at the outset, and similarly for B. If that is not the case, the method needs more adapting and about twice as much I/O.
Suppose you can fit 1/k'th of file A into memory and still have room for the other required data structures. The whole file B can then be processed in k or fewer passes, as below, and this has a chance of being much faster than sorting either file, depending on line lengths and sort-algorithm constants. Sorting averages O(n ln n) and the process below is O(k n) worst case. For example, if lines average 10 characters and there are n = 2G lines, ln(n) ~ 21.4, likely to be about 4 times as bad as O(k n) if k=5. (Algorithm constants still can change the situation either way, but with a fast hash function the method has good constants.)
Process:
1. Let Q = B (ie rename or copy B to Q). Allocate a few gigabytes for a work buffer W, and a gigabyte or so for a hash table H. Open input files A and Q, output file O, and temp file T. Go to step 2.
2. Fill work buffer W by reading from file A.
3. For each line L in W, hash L into H, such that H[hash(L)] indexes line L.
4. Read all of Q, using H to detect duplicates, writing non-duplicates to temp file T.
5. Close and delete Q, rename T to Q, open new temp file T.
6. If EOF(A), rename Q to B and quit, else go to step 2.
Note that after each pass (ie at start of step 6) none of the lines in Q are duplicates of what has been read from A so far. Thus, 1/k'th of the original file is processed per pass, and processing takes k passes. Also note that although processing will be I/O bound you can read and write several times faster with big buffers (eg 8MB) than line-by-line.
The algorithm as stated above does not include buffering details or how to deal with partial lines in big buffers.
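As a minimal illustration of steps 3 and 4 (with a hypothetical table size, and none of the buffering or partial-line handling just mentioned), one pass might look like this in C:

#include <stdio.h>
#include <string.h>

#define HBITS 20                 /* 1M buckets for the sketch; a real run */
#define HSIZE (1u << HBITS)      /* would size H as in the examples below */

static char *table[HSIZE];       /* H: line pointers, open addressing */

static unsigned hash_line(const char *s)   /* FNV-1a, masked to table size */
{
    unsigned h = 2166136261u;
    while (*s) { h ^= (unsigned char)*s++; h *= 16777619u; }
    return h & (HSIZE - 1);
}

static void insert(char *line)   /* step 3: make H[hash(L)] index line L */
{
    unsigned i = hash_line(line);
    while (table[i] != NULL)             /* linear probing; assumes the */
        i = (i + 1) & (HSIZE - 1);       /* table never fills up        */
    table[i] = line;
}

static int contains(const char *line)
{
    unsigned i = hash_line(line);
    while (table[i] != NULL) {
        if (strcmp(table[i], line) == 0) return 1;
        i = (i + 1) & (HSIZE - 1);
    }
    return 0;
}

/* One pass (steps 2-5): w holds nw lines already read from A into W;
 * q is the current Q, t is the temp file receiving non-duplicates. */
void one_pass(char **w, size_t nw, FILE *q, FILE *t)
{
    memset(table, 0, sizeof table);
    for (size_t k = 0; k < nw; k++)
        insert(w[k]);

    char buf[4096];
    while (fgets(buf, sizeof buf, q) != NULL) {   /* step 4 */
        buf[strcspn(buf, "\n")] = '\0';
        if (!contains(buf)) {
            fputs(buf, t);
            fputc('\n', t);
        }
    }
}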
Here is a simple performance example: Suppose A, B both are 20GB files, that each has about 2G passwords in it, and that duplicates are quite rare. Also suppose 8GB RAM is enough for work buffer W to be 4GB in size leaving enough room for hash table H to have say .6G 4-byte entries. Each pass (steps 2-5) reads 20% of A and reads and writes almost all of B, at each pass weeding out any password already seen in A. I/O is about 120GB read (1*A+5*B), 100GB written (5*B).
Here is a more involved performance example: Suppose about 1G randomly distributed passwords in B are duplicated in A, with all else as in previous example. Then I/O is about 100GB read and 70GB written (20+20+18+16+14+12 and 18+16+14+12+10, respectively).
Searching in external files is going to be painfully slow, even using binary search. You might speed it up by putting the data in an actual database designed for fast lookups. You could also sort both text files once and then do a single linear scan to filter out the common words. Something like the following pseudocode:
sort the files using any suitable sorting utility
open files A and B for reading
read wordA from A
read wordB from B
while (A not EOF and B not EOF)
{
    if (wordA < wordB)
    {
        write wordA to output
        read wordA from A
    }
    else if (wordA > wordB)
        read wordB from B
    else
        read wordA from A    /* match found, don't output wordA */
}
while (A not EOF)            /* output remaining words */
{
    write wordA to output
    read wordA from A
}
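And here is a runnable C rendering of that sketch, assuming both files are already sorted with one password per line (the file names are hypothetical):

#include <stdio.h>
#include <string.h>

/* Merge-style set difference: print the lines of A that are not in B.
 * Both files must be sorted; fgets keeps the trailing newline, which
 * is harmless for the comparison as long as both files end in one. */
int main(void)
{
    FILE *a = fopen("a_sorted.txt", "r");
    FILE *b = fopen("b_sorted.txt", "r");
    if (a == NULL || b == NULL) { perror("fopen"); return 1; }

    char wa[256], wb[256];
    int haveA = fgets(wa, sizeof wa, a) != NULL;
    int haveB = fgets(wb, sizeof wb, b) != NULL;

    while (haveA && haveB) {
        int cmp = strcmp(wa, wb);
        if (cmp < 0) {                    /* wordA < wordB: keep it */
            fputs(wa, stdout);
            haveA = fgets(wa, sizeof wa, a) != NULL;
        } else if (cmp > 0) {             /* advance B */
            haveB = fgets(wb, sizeof wb, b) != NULL;
        } else {                          /* match: drop wordA */
            haveA = fgets(wa, sizeof wa, a) != NULL;
        }
    }
    while (haveA) {                       /* output remaining words */
        fputs(wa, stdout);
        haveA = fgets(wa, sizeof wa, a) != NULL;
    }

    fclose(a);
    fclose(b);
    return 0;
}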
Like so:
Concatenate the two files.
Use sort to sort the total result.
Use uniq to remove duplicates from the sorted total.
If C++ is an option for you, the ready-to-use STXXL library should be able to handle your dataset.
Anyway, if you use an external sort in C, as suggested in another answer, I think you should sort both files and then scan them sequentially. The scan should be fast, and the two sorts can be done in parallel.

Reading a list of numbers separated by comma into an array in C

I have a file of the following form
-1,1.2
0.3,1.5
Basically a list of vectors, where the dimension of the vectors is known but the number of vectors isn't. I need to read each vector into an array. In other words I need to turn
-1,1.2
into an array of doubles so that vector[0] == -1, vector[1] == 1.2
I'm really not sure how to start.
There are three parts to the problem:
Getting access to the data in the file, i.e. opening it
Reading the data in the file
Tidying up, i.e. closing the file
The first and last parts are covered in this tutorial, along with a couple of other things.
The middle part can be done using formatted input; here's an example. As long as the input is well formatted, i.e. it is in the format you expect, this will work fine. If the file contains formatting errors, things get trickier: you need to parse the file for those errors before converting the data.
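For the well-formatted case, here is a minimal sketch, assuming dimension 2 as in the question and a hypothetical file name, growing the array of vectors as lines arrive:

#include <stdio.h>
#include <stdlib.h>

#define DIM 2    /* the known vector dimension from the question */

int main(void)
{
    FILE *fp = fopen("vectors.txt", "r");   /* hypothetical name */
    if (fp == NULL) {
        perror("fopen");
        return 1;
    }

    size_t used = 0, cap = 8;
    double (*vecs)[DIM] = malloc(cap * sizeof *vecs);
    if (vecs == NULL) return 1;

    double a, b;
    while (fscanf(fp, "%lf,%lf", &a, &b) == 2) {   /* one vector per line */
        if (used == cap) {                          /* grow as needed */
            cap *= 2;
            void *tmp = realloc(vecs, cap * sizeof *vecs);
            if (tmp == NULL) { free(vecs); fclose(fp); return 1; }
            vecs = tmp;
        }
        vecs[used][0] = a;
        vecs[used][1] = b;
        used++;
    }
    fclose(fp);

    for (size_t i = 0; i < used; i++)
        printf("vector[%zu] = {%g, %g}\n", i, vecs[i][0], vecs[i][1]);

    free(vecs);
    return 0;
}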
