find and delete lines in file python 3 - file

I use python 3
Okay, I got a file that looks like this:
id:1
1
34
22
52
id:2
1
23
22
31
id:3
2
12
3
31
id:4
1
21
22
11
how can I find and delete only this part of the file?
id:2
1
23
22
31
I have been trying a lot to do this but can't get it to work.

Is the id used for the decision to delete the sequence, or is the list of values used for the decision?
You can build a dictionary where the id number is the key (converted to int because of the later sorting) and the following lines form the list of strings that is the value for that key. Then you can delete the item with key 2, traverse the items sorted by key, and output the new id:key line plus the formatted list of strings.
Or you can build a list of lists, which preserves the order. If the sequence of the ids is to be preserved (i.e. not renumbered), you can also remember the id:n line in the inner list.
This can be done for a reasonably sized file. If the file is huge, you should copy the source to the destination and skip the unwanted sequence on the fly. That last approach can be fairly easy for a small file as well.
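For illustration, here is a minimal sketch of the dictionary variant just described (the function name delete_ids and the regular expression are my own choices, and the file names only follow the example used later in this answer):
import re

def delete_ids(del_ids, fname_in, fname_out):
    sections = {}                            # id number -> list of the following lines
    current = None
    with open(fname_in) as fin:
        for line in fin:
            m = re.match(r'^id:(\d+)\s*$', line)
            if m:
                current = int(m.group(1))    # int key because of the later sorting
                sections[current] = []
            elif current is not None:
                sections[current].append(line)
    with open(fname_out, 'w') as fout:
        for key in sorted(sections):         # traverse the items sorted by the key
            if key in del_ids:
                continue                     # skip (delete) the unwanted section
            fout.write('id:{}\n'.format(key))
            fout.writelines(sections[key])

delete_ids({2}, 'data.txt', 'output.txt')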
[added after the clarification]
I recommend learning the following approach, which is useful in many such cases. It uses a so-called finite automaton that implements actions bound to transitions from one state to another (see Mealy machine).
The text line is the input element here. The nodes that represent the context status are numbered. (My experience is that it is not worth giving them names -- keep them as plain numbers.) Only two states are used here, and the status could easily be replaced by a boolean variable. However, once the case becomes more complicated, it leads to the introduction of yet another boolean variable, and the code becomes more error prone.
The code may look very complicated at first, but it is fairly easy to understand once you know that you can think about each "if status == number" branch separately. This is the mentioned context that captures the previous processing. Do not try to optimize; leave the code that way. It can then be decoded by a human later, and you can draw a picture similar to the Mealy machine example. If you do, the code becomes much more understandable.
The wanted functionality is a bit generalized -- a set of ignored sections can be passed as the first argument:
import re

def filterSections(del_set, fname_in, fname_out):
    '''Filtering out the del_set sections from fname_in. Result in fname_out.'''
    # The regular expression was chosen for detecting and parsing the id-line.
    # It can be done differently, but I consider it just fine and efficient.
    rex_id = re.compile(r'^id:(\d+)\s*$')

    # Let's open the input and output file. The files will be closed
    # automatically.
    with open(fname_in) as fin, open(fname_out, 'w') as fout:
        status = 1                   # initial status -- expecting the id line
        for line in fin:
            m = rex_id.match(line)   # get the match object if it is the id-line

            if status == 1:          # skipping the non-id lines
                if m:                # you can also write "if m is not None:"
                    num_id = int(m.group(1))   # get the numeric value of the id
                    if num_id in del_set:      # if this id should be deleted
                        status = 1             # or pass (to stay in this status)
                    else:
                        fout.write(line)       # copy this id-line
                        status = 2             # to copy the following non-id lines
                # else ignore this line (no code needed to ignore it :)

            elif status == 2:        # copy the non-id lines
                if m:                # the id-line found
                    num_id = int(m.group(1))   # get the numeric value of the id
                    if num_id in del_set:      # if this id should be deleted
                        status = 1             # or pass (to stay in this status)
                    else:
                        fout.write(line)       # copy this id-line
                        status = 2             # to copy the following non-id lines
                else:
                    fout.write(line)           # copy this non-id line

if __name__ == '__main__':
    filterSections({1, 3}, 'data.txt', 'output.txt')
    # or you can write the older set([1, 3]) for the first argument.
Here the output id-lines were given their original numbers. If you want to renumber the sections, it can be done via a simple modification (see the sketch below). Try the code and ask for details.
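One possible way to do the renumbering is a separate pass over the already filtered file. This is only a sketch under that assumption; renumber_sections is a made-up name:
import re

def renumber_sections(fname_in, fname_out):
    rex_id = re.compile(r'^id:(\d+)\s*$')
    new_id = 0
    with open(fname_in) as fin, open(fname_out, 'w') as fout:
        for line in fin:
            if rex_id.match(line):
                new_id += 1
                fout.write('id:{}\n'.format(new_id))   # write the new number
            else:
                fout.write(line)                       # copy everything else unchanged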
Beware, finite automata have limited power. They cannot parse the usual programming languages, because they are not able to capture nested paired structures (like parentheses).
P.S. The 7000 lines is actually a tiny file from a computer perspective ;)

Read each line into an array of strings; the index number is the line number minus 1. Check each line as you read it: if it equals "id:2", stop adding lines to the array until you reach a line that equals "id:3". After reading, clear the file and write the array back to it, up to the end of the array. This may not be the most efficient way, but it should work.
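A rough sketch of that idea, assuming the exact "id:2" and "id:3" markers from the question and a file called data.txt (both assumptions on my part):
kept = []
skipping = False
with open('data.txt') as f:
    for line in f:
        if line.strip() == 'id:2':
            skipping = True           # start of the section to delete
        elif line.strip() == 'id:3':
            skipping = False          # the next section starts, keep lines again
        if not skipping:
            kept.append(line)

with open('data.txt', 'w') as f:      # clear the file and write the array back
    f.writelines(kept)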

If there aren't any values in between that would interfere, this would work:
import fileinput
...
def deleteIdGroup(number):
    deleted = False
    for line in fileinput.input("testid.txt", inplace=1):
        line = line.strip('\n')
        if line.count("id:" + number):   # > 0
            deleted = True
        elif line.count("id:"):          # > 0
            deleted = False
        if not deleted:
            print(line)
EDIT:
sorry, this deletes id:2 and also id:20 ... you could modify it so that the first if checks line == "id:" + number
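With that fix applied, the function would look roughly like this (still only a sketch, same file name assumption as above):
import fileinput

def deleteIdGroup(number):
    deleted = False
    for line in fileinput.input("testid.txt", inplace=1):
        line = line.strip('\n')
        if line == "id:" + number:       # exact match, so "id:20" is no longer caught by "2"
            deleted = True
        elif line.count("id:"):          # > 0
            deleted = False
        if not deleted:
            print(line)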


Reading a specific line in Julia

Since I'm new to Julia, I sometimes have problems that may be obvious to you.
This time I do not know how to read a certain piece of data from a file, i.e.:
...
stencil: half/bin/3d/newton
bin: intel
Per MPI rank memory allocation (min/avg/max) = 12.41 | 12.5 | 12.6 Mbytes
Step TotEng PotEng Temp Press Pxx Pyy Pzz Density Lx Ly Lz c_Tr
200000 261360.25 261349.16 413.63193 2032.9855 -8486.073 4108.1669
200010 261360.45 261349.36 413.53903 22.925126 -29.762605 132.03134
200020 261360.25 261349.17 413.46495 20.373081 -30.088775 129.6742
Loop
What I want is to read this file from the third row after "Step" (the one which starts at 200010; this can be a different number, as I have many files which start at the same place but with a different integer) until the program reaches "Loop". Could you help me please? I'm stuck; I don't know how to combine the different options of Julia to do it...
Here is one solution. It uses eachline to iterate over the lines. The loop skips the header, and any empty lines, and breaks when the Loop line is found. The lines to keep are returned in a vector. You might have to modify the detection of the header and/or the end token depending on the exact file format you have.
julia> function f(file)
           result = String[]
           for line in eachline(file)
               if startswith(line, "Step") || isempty(line)
                   continue # skip the header and any empty lines
               elseif startswith(line, "Loop")
                   break # stop reading completely
               end
               push!(result, line)
           end
           return result
       end
f (generic function with 2 methods)

julia> f("file.txt")
3-element Array{String,1}:
 "200000 261360.25 261349.16 413.63193 2032.9855 -8486.073 4108.1669"
 "200010 261360.45 261349.36 413.53903 22.925126 -29.762605 132.03134"
 "200020 261360.25 261349.17 413.46495 20.373081 -30.088775 129.6742"

SPSS recoding variables data from multiple variables into boolean variables

I have 26 variables and each of them contains numbers ranging from 1 to 61. For each value 1, each value 2, etc., I want the number 1 in a new variable. If the value does not occur, the new variable should contain 2.
So 26 variables with data like:
1 15 28 39 46 1 12 etc.
And I want 61 variables with:
1 2 1 2 2 1 etc.
I have been reading about creating vectors, loops, do if's etc but I can't find the right way to code it. What I have done is just creating 61 variables and writing
do if V1=1 or V2=1 or (etc until V26).
recode newV1=1.
end if.
exe.
**repeat this for all 61 variables.
recode newV1 to newV61(missing=2).
So this is a lot of code and quite a detour from what I imagine it could be.
Anyone who can help me out with this one? Your help is much appreciated!
noumenal is correct, you could do it with two loops. Another way, though, is to access the VECTOR using the original value, writing that position as 1, and then recoding all the other (missing) values to 2 afterwards.
To illustrate, first I make some fake data (with 4 original variables instead of 26) named X1 to X4.
*Fake Data.
SET SEED 10.
INPUT PROGRAM.
LOOP Id = 1 TO 20.
END CASE.
END LOOP.
END FILE.
END INPUT PROGRAM.
VECTOR X(4,F2.0).
LOOP #i = 1 TO 4.
COMPUTE X(#i) = TRUNC(RV.UNIFORM(1,62)).
END LOOP.
EXECUTE.
Now what this code does is create four vector sets to go along with each variable, then uses DO REPEAT to actually refer to the VECTOR stub. It then finishes up with RECODE: if a value is missing, it is coded as 2.
VECTOR V1_ V2_ V3_ V4_ (61,F1.0).
DO REPEAT orig = X1 TO X4 /V = V1_ V2_ V3_ V4_.
COMPUTE V(orig) = 1.
END REPEAT.
RECODE V1_1 TO V4_61 (SYSMIS = 2).
It is a little painful, as for the original VECTOR command you need to write out all of the stubs, but then you can copy-paste that into the DO REPEAT subcommand (or make a macro to do it for you).
For a more simple illustration, if we have our original variable, say A, that can take on integer values from 1 to 61, and we want to expand to our 61 dummy variables, we would then make a vector and then access the location in that vector.
VECTOR DummyVec(61,F1.0).
COMPUTE DummyVec(A) = 1.
For a record where A = 10, DummyVec10 will equal 1 here, and all the other DummyVec variables will still be system missing by default. No need to use DO IF for 61 values.
The rest of the code is just extra to do it in one swoop for multiple original variables.
This should do it:
do repeat NewV=NewV1 to NewV61/vl=1 to 61.
compute NewV=any(vl,v1 to v26).
end repeat.
EXPLANATION:
This syntax will go through values 1 to 61, for each one checking whether any of the variables v1 to v26 has that value. If any of them do, the right NewV will receive the value of 1. If none of them do, the right NewV will receive the value of 0.
Just make sure v1 to v26 are consecutively ordered in the file. If not, then change to:
compute NewV=any(vl,v1, v2, v3, v4 ..... v26).
You need a nested loop: two loops - one outer and one inner.

Pass Array to Subroutine IDL

I have a very long lookup table (~40,000 lines) that I am using for my code. Currently, I have it set to grab 4 arrays from my lookup table in the subroutine that uses it, but I call that subroutine ~3,000 times. I would rather not waste processing time grabbing this table as arrays repeatedly. Is there a way to grab them in my main program, store them, and source them later in my subroutine?
My current code grabs the lookup table in 4 separate arrays of 39,760 lines, and I am currently calling it like this:
READCOL, 'LookupTable2.txt', F='D,D,D,D',Albedo, Inertia, NightT, DayT
EDIT: I should probably note I have IDL 6.2, but if there is a way to do it in a newer version, I would still appreciate knowing how.
EDIT 2: My current program has a function which saves 4 arrays and executes the main function. Can I call my function with the arrays as arguments? That way I wouldn't have to keep creating the same arrays.
Something like:
Pro
FUNC(Array1, Array2, Var1, Var2, Var3)
END
There are several ways you can do this.
It looks like you have four columns and 40,000 lines, correct?
Then you can do the following. First, I will assume there is no header data in the ASCII file for the following commands.
FUNCTION read_my_file,file_name
  ;; Assume FILE_NAME is full path to and including file name with extension
  fname = file_name[0]
  ;; One could also find the file with the following
  ;; fname = FILE_SEARCH([path to file],[file name with extension])

  ;; Define the number of lines in the file
  nl = FILE_LINES(fname[0])
  ;; Define empty arrays to fill
  col1 = DBLARR(nl[0])
  col2 = DBLARR(nl[0])
  col3 = DBLARR(nl[0])
  col4 = DBLARR(nl[0])
  dumb = DBLARR(4)
  ;; Open file
  OPENR,gunit,fname[0],ERROR=err,/GET_LUN
  IF (err NE 0) THEN PRINT, -2, !ERROR_STATE.MSG  ;; Prints an error message
  FOR n=0L, nl[0] - 1L DO BEGIN
    ;; Read in file data
    READF,gunit,FORMAT='(4d)',dumb
    ;; Fill arrays
    col1[n] = dumb[0]
    col2[n] = dumb[1]
    col3[n] = dumb[2]
    col4[n] = dumb[3]
  ENDFOR
  ;; Close file
  FREE_LUN,gunit
  ;; Define output
  output = [[col1],[col2],[col3],[col4]]
  ;; Return to calling routine
  RETURN,output
END
Note that this will work better if you provide an explicit width for the format statement, e.g., '(4d15.5)', which means a 15 character input with 5 decimal places.
This will return col1 through col4 to the user or calling routine as an [N,4]-element array, e.g., col1 = output[*,0]. You could use a structure where each tag contains one of the colj arrays or you could return them through keywords.
Then you can pass these arrays to another function/program in the following way:
PRO my_algorithm_wrapper,file_name,RESULTS=results
  ;; Get data from files
  columns = read_my_file(file_name)
  ;; Pass data to [algorithm] function
  results = my_algorithm(columns[*,0],columns[*,1],columns[*,2],columns[*,3])
  ;; Return to user
  RETURN
END
To call this from the command line (after making sure both routines are compiled), you would do something like the following:
IDL> my_algorithm_wrapper,file_name,RESULTS=results
IDL> HELP,results ;; see what the function my_algorithm.pro returned
The above code should work with IDL 6.2.
General Notes
Try to avoid using uppercase letters in IDL routine names as it can cause issues when IDL searches for the routine during a call or compilation statement.
You need to name the program/function in the line with the PRO/FUNCTION statement at the beginning of the file. The name must come immediately after the PRO/FUNCTION statement.
It is generally wise to use explicit formatting statements to avoid ambiguities/errors when reading data files.
You can pass any variable type (e.g., scalar integer, array, structure, object, etc.) to programs/functions so long as they are handled appropriately within the program/function.

Ruby 1.9.2 in Rails 3: A Case where Array.length Does Not Equal Array.count?

My understanding is that count and length should return the same number for Ruby arrays. So I can't figure out what is going on here (FactoryGirl is set to create--save to database--by default):
f = Factory(:family) # Also creates one dependent member
f.members.count # => 1
f.members.length # => 1
m = Factory(:member, :family=>f, :first_name=>'Sam') #Create a 2nd family member
f.members.count # => 2
f.members.length # => 1
puts f.members # prints a single member, the one created in the first step
f.members.class # => Array
f.reload
[ Now count == length == 2, and puts f.members prints both members ]
I vaguely understand why f needs to be reloaded, though I would have expected that f.members would involve a database lookup for members with family_id=f.id, and would return all the members even if f is stale.
But how can the count be different from the length? f.members is an Array, but is the count method being overridden somewhere, or is the Array.count actually returning a different result from Array.length? Not a pressing issue, just a mystery that might indicate a basic flaw in my understanding of Ruby or Rails.
In looking at the source, https://github.com/rails/rails/blob/master/activerecord/lib/active_record/associations/collection_association.rb, length calls the size method on the internal collection and count actually calls count on the database.

Algorithm to find "most common elements" in different arrays

I have for example 5 arrays with some inserted elements (numbers):
1,4,8,10
1,2,3,4,11,15
2,4,20,21
2,30
I need to find the most common elements in those arrays, and every element should go all the way till the end (see example below). In this example that would be the combination 4,4,4,2 (or the same one but with "30" on the end, it's the "same") because it contains the smallest number of different elements (only two, 4 and 2/30).
This combination (see below) isn't good because if I have, for example, "4", it must "go" till it ends (the next array mustn't contain "4" at all). So the combination must go all the way till the end.
1,4,8,10
1,2,3,4,11,15
2,4,20,21
2,30
EDIT2: OR
1,4,8,10
1,2,3,4,11,15
2,4,20,21
2,30
OR anything else is NOT good.
Is there some algorithm to speed this thing up (if I have thousands of arrays with hundreds of elements in each one)?
To make it clear - the solution must contain the lowest number of different elements, and the groups (of the same numbers) must be ordered from the first, larger ones to the last, smallest ones. So in the upper example 4,4,4,2 is better than 4,2,2,2 because in the first example the group of 4's is larger than the group of 2's.
EDIT: To be more specific. The solution must contain the smallest number of different elements and those elements must be grouped from first to last. So if I have three arrays like
1,2,3
1,4,5
4,5,6
The solution is 1,1,4 or 1,1,5 or 1,1,6, NOT 2,5,5, because the 1's have a larger group (two of them) than the 2's (only one).
Thanks.
EDIT3: I can't be more specific :(
EDIT4: #spintheblack: 1,1,1,2,4 is the correct solution because a number used the first time (let's say at position 1) can't be used later (unless it's in the SAME group of 1's). I would say that grouping has the "priority". Also, I didn't mention it (sorry about that), but the numbers in the arrays are NOT sorted in any way; I typed them that way in this post because it was easier for me to follow.
Here is the approach you want to take, if arrays is an array that contains each individual array.
1. Start at i = 0
2. current = arrays[i]
3. Loop i from i+1 to len(arrays)-1
4. new = current & arrays[i] (set intersection, finds common elements)
5. If there are any elements in new, do step 6, otherwise skip to step 7
6. current = new, return to step 3 (continue loop)
7. print or yield an element from current, current = arrays[i], return to step 3 (continue loop)
Here is a Python implementation:
def mce(arrays):
    count = 1
    current = set(arrays[0])
    for i in range(1, len(arrays)):
        new = current & set(arrays[i])
        if new:
            count += 1
            current = new
        else:
            print " ".join([str(current.pop())] * count),
            count = 1
            current = set(arrays[i])
    print " ".join([str(current.pop())] * count)
>>> mce([[1, 4, 8, 10], [1, 2, 3, 4, 11, 15], [2, 4, 20, 21], [2, 30]])
4 4 4 2
If all are number lists, and all are sorted, then:
1. Convert to an array of bitmaps.
2. Keep 'AND'ing the bitmaps till you hit zero. The position of the 1 in the previous value indicates the first element.
3. Restart step 2 from the next element.
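Here is a rough Python sketch of that bitmap idea, using plain integers as bitmaps (bit k is set when value k occurs). The name common_runs and the choice of reporting the highest surviving bit are my own assumptions; as discussed elsewhere in this thread, a greedy pass like this does not cover every requirement the OP added later:
def common_runs(arrays):
    # each array becomes one integer bitmap: bit k is set if value k is present
    bitmaps = [sum(1 << v for v in set(a)) for a in arrays]
    result = []
    i = 0
    while i < len(bitmaps):
        acc = bitmaps[i]
        j = i
        # keep AND'ing the following bitmaps while some common bit survives
        while j + 1 < len(bitmaps) and acc & bitmaps[j + 1]:
            acc &= bitmaps[j + 1]
            j += 1
        value = acc.bit_length() - 1         # highest surviving 1-bit (any surviving bit would do)
        result.extend([value] * (j - i + 1))
        i = j + 1
    return result

print(common_runs([[1, 4, 8, 10], [1, 2, 3, 4, 11, 15], [2, 4, 20, 21], [2, 30]]))
# prints [4, 4, 4, 30]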
This has now turned into a graphing problem with a twist.
The problem is a directed acyclic graph of connections between stops, and the goal is to minimize the number of lines switches when riding on a train/tram.
ie. this list of sets:
1,4,8,10 <-- stop A
1,2,3,4,11,15 <-- stop B
2,4,20,21 <-- stop C
2,30 <-- stop D, destination
He needs to pick lines that are available at his exit stop, and his arrival stop, so for instance, he can't pick 10 from stop A, because 10 does not go to stop B.
So, this is the set of available lines and the stops they stop on:
A B C D
line 1 -----X-----X-----------------
line 2 -----------X-----X-----X-----
line 3 -----------X-----------------
line 4 -----X-----X-----X-----------
line 8 -----X-----------------------
line 10 -----X-----------------------
line 11 -----------X-----------------
line 15 -----------X-----------------
line 20 -----------------X-----------
line 21 -----------------X-----------
line 30 -----------------------X-----
If we consider that a line under consideration must go between at least 2 consecutive stops, let me highlight the possible choices of lines with equal signs:
A B C D
line 1 -----X=====X-----------------
line 2 -----------X=====X=====X-----
line 3 -----------X-----------------
line 4 -----X=====X=====X-----------
line 8 -----X-----------------------
line 10 -----X-----------------------
line 11 -----------X-----------------
line 15 -----------X-----------------
line 20 -----------------X-----------
line 21 -----------------X-----------
line 30 -----------------------X-----
He then needs to pick a way that transports him from A to D, with the minimal number of line switches.
Since he explained that he wants the longest rides first, the following sequence seems the best solution:
take line 4 from stop A to stop C, then switch to line 2 from C to D
Code example:
stops = [
    [1, 4, 8, 10],
    [1, 2, 3, 4, 11, 15],
    [2, 4, 20, 21],
    [2, 30],
]

def calculate_possible_exit_lines(stops):
    """
    only return lines that are available at both exit
    and arrival stops, discard the rest.
    """
    result = []
    for index in range(0, len(stops) - 1):
        lines = []
        for value in stops[index]:
            if value in stops[index + 1]:
                lines.append(value)
        result.append(lines)
    return result

def all_combinations(lines):
    """
    produce all combinations which travel from one end
    of the journey to the other, across available lines.
    """
    if not lines:
        yield []
    else:
        for line in lines[0]:
            for rest_combination in all_combinations(lines[1:]):
                yield [line] + rest_combination

def reduce(combination):
    """
    reduce a combination by returning the number of
    times each value appears consecutively, i.e.
    [1,1,4,4,3] would return [2,2,1] since
    the 1's appear twice, the 4's appear twice, and
    the 3 only appears once.
    """
    result = []
    while combination:
        count = 1
        value = combination[0]
        combination = combination[1:]
        while combination and combination[0] == value:
            combination = combination[1:]
            count += 1
        result.append(count)
    return tuple(result)

def calculate_best_choice(lines):
    """
    find the best choice by reducing each available
    combination down to the number of stops you can
    sit on a single line before having to switch,
    and then picking the one that has the most stops
    first, and then so on.
    """
    available = []
    for combination in all_combinations(lines):
        count_stops = reduce(combination)
        available.append((count_stops, combination))
    available = [k for k in reversed(sorted(available))]
    return available[0][1]

possible_lines = calculate_possible_exit_lines(stops)
print("possible lines: %s" % (str(possible_lines), ))
best_choice = calculate_best_choice(possible_lines)
print("best choice: %s" % (str(best_choice), ))
This code prints:
possible lines: [[1, 4], [2, 4], [2]]
best choice: [4, 4, 2]
Since, as I said, I list the lines between stops, the above solution can be read either as the lines you have to exit each stop on, or as the lines you have to arrive on at the next stop.
So the route is:
Hop onto line 4 at stop A and ride on that to stop B, then to stop C
Hop onto line 2 at stop C and ride on that to stop D
There are probably edge-cases here that the above code doesn't work for.
However, I'm not bothering more with this question. The OP has demonstrated a complete inability to communicate his question in a clear and concise manner, and I fear that any corrections to the above text and/or code to accommodate the latest comments will only provoke more comments, which leads to yet another version of the question, and so on ad infinitum. The OP has gone to extraordinary lengths to avoid answering direct questions or explaining the problem.
I am assuming that "distinct elements" do not have to actually be distinct; they can repeat in the final solution. That is, if presented with [1], [2], [1], the obvious answer [1, 2, 1] is allowed. But we'd count this as having 3 distinct elements.
If so, then here is a Python solution:
def find_best_run(first_array, *argv):
    # initialize data structures.
    this_array_best_run = {}
    for x in first_array:
        this_array_best_run[x] = (1, (1,), (x,))

    for this_array in argv:
        # find the best runs ending at each value in this_array
        last_array_best_run = this_array_best_run
        this_array_best_run = {}
        for x in this_array:
            for (y, pattern) in last_array_best_run.iteritems():
                (distinct_count, lengths, elements) = pattern
                if x == y:
                    lengths = tuple(lengths[:-1] + (lengths[-1] + 1,))
                else:
                    distinct_count += 1
                    lengths = tuple(lengths + (1,))
                    elements = tuple(elements + (x,))
                if x not in this_array_best_run:
                    this_array_best_run[x] = (distinct_count, lengths, elements)
                else:
                    (prev_count, prev_lengths, prev_elements) = this_array_best_run[x]
                    if distinct_count < prev_count or prev_lengths < lengths:
                        this_array_best_run[x] = (distinct_count, lengths, elements)

    # find the best overall run
    best_count = len(argv) + 10 # Needs to be bigger than any possible answer.
    for (distinct_count, lengths, elements) in this_array_best_run.itervalues():
        if distinct_count < best_count:
            best_count = distinct_count
            best_lengths = lengths
            best_elements = elements
        elif distinct_count == best_count and best_lengths < lengths:
            best_count = distinct_count
            best_lengths = lengths
            best_elements = elements

    # convert it into a more normal representation.
    answer = []
    for (length, element) in zip(best_lengths, best_elements):
        answer.extend([element] * length)
    return answer

# example
print find_best_run(
    [1,4,8,10],
    [1,2,3,4,11,15],
    [2,4,20,21],
    [2,30]) # prints [4, 4, 4, 30]
Here is an explanation. The ..._best_run dictionaries have keys which are elements in the current array, and values which are tuples (distinct_count, lengths, elements). We are trying to minimize distinct_count, then maximize lengths (lengths is a tuple, so this will prefer the element with the largest value in the first spot), and we track elements for the end. At each step I construct all possible runs which are a combination of a run up to the previous array with this element next in sequence, and find which ones are best up to the current array. When I get to the end I pick the best possible overall run, then turn it into a conventional representation and return it.
If you have N arrays of length M, this should take O(N*M*M) time to run.
I'm going to take a crack here based on the comments; please feel free to comment further to clarify.
We have N arrays and we are trying to find the "most common" value over all arrays when one value is picked from each array. There are several constraints: 1) we want the smallest number of distinct values; 2) the most common is the maximal grouping of similar letters (changing notation from above for clarity). Thus, 4 t's and 1 p beats 3 x's and 2 y's.
I don't think either problem can be solved greedily. Here's a counterexample: [[1,4],[1,2],[1,2],[2],[3,4]]. A greedy algorithm would pick [1,1,1,2,4] (3 distinct numbers) instead of [4,2,2,2,4] (two distinct numbers).
This looks like a bipartite matching problem, but I'm still coming up with the formulation..
EDIT : ignore; This is a different problem, but if anyone can figure it out, I'd be really interested
EDIT 2: For anyone that's interested, the problem that I misinterpreted can be formulated as an instance of the Hitting Set problem, see http://en.wikipedia.org/wiki/Vertex_cover#Hitting_set_and_set_cover. Basically the left hand side of the bipartite graph would be the arrays and the right hand side would be the numbers, with edges drawn between each array and the numbers it contains. Unfortunately, this is NP-complete, but the greedy solutions described above are essentially the best approximation.
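For the record, here is a minimal sketch of the standard greedy approximation for that hitting-set reading (repeatedly pick the number that hits the most remaining arrays). The name greedy_hitting_set is made up, and this deliberately ignores the OP's grouping requirement:
def greedy_hitting_set(arrays):
    # repeatedly pick the value that appears in the most not-yet-hit arrays
    remaining = [set(a) for a in arrays]
    chosen = []
    while remaining:
        counts = {}
        for s in remaining:
            for v in s:
                counts[v] = counts.get(v, 0) + 1
        best = max(counts, key=counts.get)          # value hitting the most arrays
        chosen.append(best)
        remaining = [s for s in remaining if best not in s]
    return chosen

print(greedy_hitting_set([[1, 4, 8, 10], [1, 2, 3, 4, 11, 15], [2, 4, 20, 21], [2, 30]]))
# e.g. [4, 2] -- every array contains at least one chosen value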
