Script is dropping colon from drive letter - arrays

I am attempting to write a script that will identify whether one of multiple USB drives is connected, in order to create a variable that can be used in a script so I don't have to change variables every time I copy a script to one of these flash drives.
$USBlibrary = @("Test1","Test2")
$USBcurrent = (GWmi Win32_LogicalDisk | Where-Object {$_.VolumeName -in $USBlibrary} | % {$_.DeviceID})
$mydrive = $($USBcurrent[0])
Write-Host "$mydrive"
When indexing the $USBcurrent variable with multiple drives connected, it outputs correctly as "D:" or "E:"; however, if only one drive is connected (which would be the norm), the system drops the colon from the variable. My understanding is that the system is attempting to read the colon as defining scope, but I have been unable to find a solution to keep this from happening.

That's because $USBCurrent is a single string, not an array. Change (GWmi ...) to @(GWmi ...) to force array output - Mathias R. Jessen
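A minimal sketch of that suggested fix, reusing the names from the question (the volume names "Test1" and "Test2" are just placeholders):
$USBlibrary = @("Test1","Test2")
# Wrapping the whole pipeline in @() guarantees an array,
# even when only one drive matches.
$USBcurrent = @(GWmi Win32_LogicalDisk |
    Where-Object { $_.VolumeName -in $USBlibrary } |
    ForEach-Object { $_.DeviceID })
if ($USBcurrent.Count -gt 0) {
    # Indexing now returns a whole element such as "D:",
    # not the first character of a lone string
    $mydrive = $USBcurrent[0]
    Write-Host "$mydrive"
}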

Related

Process two strings per line, data and process in single bash script

I already have a working solution using a while read loop with IFS to process a CSV file, but I'd like to have it all in a single bash script since the input data never changes.
The data is a list of IP addresses and names:
10.0.0.1,server1
10.0.0.2,server2
172.16.0.1,server3
192.168.0.1,server4
The process itself will run a ping/curl/wget as required, all the while echoing out which server and test it is doing.
I can run the IP list on its own in the same file using a list function and reading the items, but then I don't get the friendly server name echoed out. So my question is, how should I approach this? I was thinking of creating the data array and then parsing it into a read somehow and splitting the tokens, but I'm not sure how. I thought about writing the data out to a temp file, then reading it in again and deleting the temp file afterwards, but that seems messy. Any pointers appreciated.
In terms of a working solution (if someone wanted to provide one instead of just advising), the output of the above data could just be echoed out like this:
Testing: $server, IP address: $ip, test 1.
Then I will just sub the tests as required.
Thanks
If you want to include your data directly in the script instead of reading it from a separate file, and you're already using a loop to read the existing data, the easiest way is probably just to copy and paste the data file's contents into a here-document that the loop reads instead:
#!/usr/bin/env bash
declare -i testno=1 # Make testno an integer variable that uses arithmetic expansion
while IFS=, read -r ip server; do
echo "Testing: $server, IP address: $ip, test $testno"
testno+=1
done <<EOF
10.0.0.1,server1
10.0.0.2,server2
172.16.0.1,server3
192.168.0.1,server4
EOF
which will display
Testing: server1, IP address: 10.0.0.1, test 1
Testing: server2, IP address: 10.0.0.2, test 2
Testing: server3, IP address: 172.16.0.1, test 3
Testing: server4, IP address: 192.168.0.1, test 4

Extracting and comparing only a certain column of a file

I need to write a PowerShell script that allows the user to pass, as a parameter, a txt file containing the standard information you'd get from a
Get-Process > proc.txt
statement, and then compares the processes in the file with the currently running ones. I then need to display the ID, name, start time and running time for every process that isn't in the txt file and is therefore a new process.
To give you a general idea of how I would approach this, I would:
1. Extract only the names of the processes from the txt file into a variable (v1).
2. Save only the names of all the currently running processes in a variable (v2).
3. Compare the two variables (v1, v2) and write the processes that are not in the txt file (the new ones) into yet another variable (v3).
4. Get the process ID, the start time and the running time for each process name in v3 and output all of that (including the name) to the console and to a new file.
First of all, how can I read only the names of the processes from the txt file? I tried to find this on the internet but had no luck.
Secondly, how can I save only the new processes in a variable, and not all the differences (e.g. processes that are in the file but currently not running)?
As far as I know,
Compare-Object
returns all the differences.
Thirdly, how can I get the remaining process information I want for all the process names in v3?
And finally, how can I then neatly combine the ID, start time, running time and the names from v3 in one file?
I'm pretty much a beginner at PowerShell programming, and I'm pretty sure my 4-step approach posted above is wrong, so I'd appreciate any help I can get.
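As a minimal sketch of the four-step outline above (assuming the txt file was created with Get-Process > proc.txt; the file names proc.txt and new_processes.txt and the three skipped header lines are placeholders, not requirements):
# 1. Extract the process names from the saved txt file.
#    Get-Process > proc.txt writes a fixed-width table whose last column is
#    the process name, so take the last whitespace-delimited token of each row.
$v1 = Get-Content proc.txt |
    Select-Object -Skip 3 |
    ForEach-Object { ($_.Trim() -split '\s+')[-1] } |
    Where-Object { $_ }

# 2. Names of all currently running processes.
$v2 = (Get-Process).Name

# 3. Keep only the names that are running now but absent from the file.
$v3 = Compare-Object $v1 $v2 |
    Where-Object SideIndicator -eq '=>' |
    Select-Object -ExpandProperty InputObject

# 4. Gather Id, Name, StartTime and running time for the new processes,
#    print them and write them to a file.
$report = Get-Process -Name $v3 -ErrorAction SilentlyContinue |
    Select-Object Id, Name, StartTime,
        @{ Name = 'RunningTime'; Expression = { (Get-Date) - $_.StartTime } }
$report | Format-Table -AutoSize
$report | Out-File new_processes.txt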

How to use .bat formatting to batch-format unicode files to ANSI files?

Total beginner to .bat programming, so please bear with me:
I've been trying to convert a massive database of Unicode files collected from scientific instruments to ANSI format. Furthermore, I need to convert all these files to .txt files.
Now, the second part is pretty trivial -- I used to do it with the "Bulk Rename Utility", and I've been able to make it work so far, I think.
The first part should be pretty straightforward, and I've found multiple similar questions, but they all seem to be for PowerShell, or for a single file, or end in long discussions about the specific encoding being used. One question seems to match mine exactly, but having tried its suggested code, only half the file comes through fine; the other half comes through as nonsense. I've been using the code:
for %%F in (*.001) do ren "*SS.001" "*SS1.001"
for %%F in (*.001) do type "%%F" >"%%~nF.txt"
and then deleting/moving the extra files.
I've converted the files by hand successfully in the past (left), but the current encoding seems to be failing (right):
[Side-by-side comparison of files encoded by hand vs. by program]
My questions are:
1. Is it possible that a single file I get from my instrument is in multiple encodings (part UTF-8, part UTF-16), and that this is messing up my program (or, more likely, I'm using an encoding that is too small)? If this is the case, I'd understand why special characters like the squared symbols and the degree symbol are breaking, but not the data, which is just numbers.
2. Is there some obvious typo in my code that is causing this bizarre error?
3. If the error might be embedded in which Unicode (8 vs 16 vs 32) or ANSI (1252 vs ???) I'm using, how would I check?
4. How would I fix this code to work?
If there's any better questions I should be asking or additional information I need to add, please let me know. Thank you!!
Is it possible that a single file I get from my instrument is in multiple encodings (part UTF-8, part UTF-16), and that this is messing up my program (or, more likely, I'm using an encoding that is too small)?
I don't believe a single file can contain multiple encodings.
Is there some obvious typo in my code that is causing this bizarre error?
The cmd environment can handle different code pages easily enough, but it struggles with multi-byte encodings and byte order marks. Indeed, this is a common problem when trying to read WMI results returned in UCS-2 LE. Although there exists a pure batch workaround for sanitizing WMI results, it unfortunately doesn't work universally with every other encoding.
If the error might be embedded in which Unicode (8 vs 16 vs 32) or ANSI (1252 vs ???) I'm using, how would I check? How would I fix this code to work?
.NET is much better at sanely dealing with files of unknown encodings. The StreamReader class, when it reads its first character, will read the BOM and detect the file encoding automatically. I know you were hoping to avoid a PowerShell solution, but PowerShell really is the easiest way to access IO methods to handle these files transparently.
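As a quick way to see that detection in action (the file name sample.001 is a placeholder):
# StreamReader inspects the BOM on the first read; Peek() forces that read
$reader = New-Object IO.StreamReader('sample.001')
$null = $reader.Peek()
$reader.CurrentEncoding.WebName   # e.g. "utf-16" for a UCS-2 LE file
$reader.Dispose()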
There is a simple way to incorporate PowerShell hybrid code into a batch script though. Save this with a .bat extension and see whether it does what you want.
<# : batch portion
#echo off & setlocal
powershell -noprofile "iex (${%~f0} | out-string)"
goto :EOF
: end batch / begin PowerShell hybrid #>
function file2ascii ($infile, $outfile) {
    # construct IO streams for reading and writing
    $reader = new-object IO.StreamReader($infile)
    $writer = new-object IO.StreamWriter($outfile, [Text.Encoding]::ASCII)

    # copy infile to ASCII encoded outfile
    while (!$reader.EndOfStream) { $writer.WriteLine($reader.ReadLine()) }

    # output summary
    $encoding = $reader.CurrentEncoding.WebName
    "{0} ({1}) -> {2} (ascii)" -f (gi $infile).Name, $encoding, (gi $outfile).Name

    # Garbage collection
    foreach ($stream in ($reader, $writer)) { $stream.Dispose() }
}

# loop through all .001 files and apply file2ascii()
gci *.001 | %{
    $outfile = "{0}\{1}.txt" -f $_.Directory, $_.BaseName
    file2ascii $_.FullName $outfile
}
While it's true that this process could be simplified using the get-content and out-file cmdlets, the IO stream methods demonstrated above avoid having to load the entire data file into memory -- a benefit if any of your data files are large.
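For comparison, a minimal sketch of that simpler cmdlet-based variant (same .001 pattern as above; it reads each whole file into memory, which is fine for small files):
gci *.001 | %{
    $outfile = "{0}\{1}.txt" -f $_.Directory, $_.BaseName
    # Get-Content honors the BOM when detecting the source encoding;
    # Out-File -Encoding ascii writes plain ASCII (a subset of ANSI) output
    Get-Content $_.FullName | Out-File $outfile -Encoding ascii
}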

Does /usr/bin/perl run as separate processes when invoked with one script multiple times?

It's a web server scenario, with Linux as the OS. Different IP addresses call the same script.
Does Linux start a new Perl process for every script call, does Perl run the multiple calls interleaved, or do the scripts run serially (one after another)?
Sorry, I didn't find an answer within the first few results of Google.
I need to know in order to know how much I have to worry about concurrent database access.
The script itself is NOT using multiple threads, it's a straightforward Perl script.
Update: a more sophisticated scheduler/serialiser, following up on my comment on the answer below; still untested:
#!/usr/bin/perl
use Fcntl qw(:DEFAULT :flock); # :DEFAULT keeps the O_* constants, :flock provides LOCK_EX
use Time::HiRes qw(nanosleep);
sysopen(INDICATOR, "indicator", O_RDWR | O_APPEND); # filename hardcoded, must exist
flock(INDICATOR, LOCK_EX);
print INDICATOR $$."\n"; # print a value unique to this script
# in single-process context it's serial anyway
# in multi-process context the pid is unique
seek(INDICATOR,0,0);
my $firstline = <INDICATOR>;
close(INDICATOR);
while("$firstline" ne $$."\n")
{
nanosleep(1000000); # time your script to find a better value
open(INDICATOR, "<", "indicator");
$firstline = <INDICATOR>;
close(INDICATOR);
}
do "transferHandler.pl"; # name of your script to run
sysopen(INDICATOR, "indicator", O_RDWR);
flock(INDICATOR, LOCK_EX);
my @content = <INDICATOR>;
shift @content;
truncate(INDICATOR,0);
seek(INDICATOR,0,0);
foreach $line (@content)
{
print INDICATOR $line;
}
close(INDICATOR);
Edit again: the above script would not work if Perl ran in a single process and interleaved (threaded) the scripts itself. Based on the answer and the feedback I received separately, that scenario is the only one of the three I asked about that does not appear to be the case. On quick thought, the script could still be made to work then by changing the unique value from the pid to a random number.
It depends entirely on how your web server is set up. Does it use plain CGI, FastCGI, or mod_perl? You can set up either of the scenarios you've described. With FastCGI you can also arrange for a script to never exit, but instead do all its work inside a loop that keeps accepting connections from the frontend web server.
Regarding the update to your question, I suggest you start worrying about concurrent access from the very start. Unless you're writing a strictly personal application and deliberately set up your server to run only one copy of your script, pretty much any other site will at some point grow into something that requires two or more scripts processing in parallel. You will save yourself a lot of headache if you plan for this very common situation ahead of time. Even if you only have one serving script, you will need indexing/clean-up/whatever done by offline tasks, and that means concurrent access once again.
If the Perl scripts are invoked separately, they will run as separate processes. Here is a demo using two scripts:
#master.pl
system('perl ./sleep.pl &');
system('perl ./sleep.pl &');
system('perl ./sleep.pl &');
#sleep.pl
sleep(10);
Then run:
perl master.pl & (sleep 1 && ps -A | grep perl)

How to manage reports/files distribution to different destinations in Unix?

The reporting tools generate a huge number of reports/files in the file system (a Unix directory). There's a list of destinations (email addresses and shared folders), and a different set of reports/files (possibly overlapping) needs to be distributed to each destination.
I would like to know if there's a way to efficiently manage this report delivery using shell scripts, so that maintaining the list of reports and destinations doesn't become a mess in the future.
It's quite an open-ended question; the constraint, however, is that it should work within the boundaries of managing the reports in a Unix file system.
You could always create a simple text file (report_locations.txt here) with the names/locations where reports go, e.g.:
ReportName1;/home/bob
ReportName2;/home/jim,/home/jill
ReportName3;/home/jill,/home/bob
In this example the report name is always the first field, and the locations where the corresponding report should go follow, delimited by commas (or any other delimiter you like).
Then read that file with a shell script (I like to use for loops for this sort of operation):
#!/usr/bin/ksh93
for REPORT in $(cut -d";" -f1 report_locations.txt)
do
    LISTS=$(grep ${REPORT} report_locations.txt | cut -d";" -f2)
    for LIST in ${LISTS}
    do
        DIRS=$(echo ${LIST} | tr ',' '\n')
        for DIR in ${DIRS}
        do
            echo "Copying ${REPORT} to ${DIR}"
            cp -f ${REPORT} ${DIR}
        done
    done
done
The use of for loops may be a bit excessive (I get caught up in them), but it gets the job done.
Not sure this is what you would be looking for, but it is a starting point if anything. Don't hesitate to ask if you need any explanation of the code.
