Can you change the default output from SLURM's squeue command?

The default output from SLURM is:
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
I'd like it to have the QOS too:
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON) QOS
Is there a way to change the default so I don't have to specify the option every time?

You can simply set the SQUEUE_FORMAT environment variable to the format string you would otherwise pass on the command line.
Example:
export SQUEUE_FORMAT="%.18i %.9P %.8j %.8u %.2t %.10M %.6D %.20R %q"
Add the line above to your .bash_profile and you will always have the additional QOS column in your output.


Creating a batch file (or something else maybe?) that scans a .txt file and then copies the specified text to another .txt file

I run a small Minecraft server, and I would like to create a Google spreadsheet for calculating user playtime data. I want this data because it would help me know whether my advertising campaigns are working. You can try to eyeball this stuff, but a solid data set would be a lot more effective than guessing whether the advertising is effective. The problem is that manually searching for data in the server logs is really hard. I would appreciate anyone who could help me build a simple script that reads a .txt file and extracts the data I need. The script needs to be able to:
Detect lines with "User Authenticator" and "Disconnected" then print the entire line.
Format the text in some way? Possibly alphabetize the lines so that we're not all over the place looking for specific users' logins and logouts, which would defeat the purpose of the script. Not sure if this is possible.
Exclude lines with certain text (usernames), we want normal player data, not admin data.
I am sorry if I did anything wrong; this is my first time on the site.
UPDATE: The admin data would be stored in a file called "admins.txt". By "alphabetizing" I meant, for example: Player A joins at 06:00, Player B joins at 06:30, then Player A leaves at 06:45 and Player B leaves at 07:00. If the data were flat, it would read something like: A: 6:00, B: 6:30, A: 6:45, B: 7:00. But I would rather it be: A: 6:00, A: 6:45, B: 6:30, B: 7:00. That would make it easier to chart out and calculate. Sorry for the long text.
Also typical server logging looks like this:
[15:46:30] [User Authenticator #1/INFO]: UUID of player DraconicPiggy is (UUID)
[15:46:31] [Server thread/INFO]: DraconicPiggy[/(Ip address)] logged in with entity id 157 at ([world]342.17291451961574, 88.0, -32.04791955684438)
The following awk script will report only on the two line types that you mentioned.
/User Authenticator|Disconnected/ {
print
}
I'm guessing "alphabetize" means sort. If so, you can pass the awk output to sort via a pipe.
awk -f script.awk | sort
I'm assuming the file is a log that's already in date-time sequence, with the timestamps at the start of the line. In this case you'll need to tell sort what to sort on; sort /? will tell you how to do this.
Multiple input files
To process all log files in the current directory use:
awk -f script.awk *.log
Redirect output to file
The simplest way is by adding > filtered.log to the command, like this:
awk -f script.awk *.log > filtered.log
That will filter all the input files into a single output file. If you need to write one filtered log for each input file then a minor script change is needed:
/User Authenticator|Disconnected/ {
print >> (FILENAME ".filtered.log")
}
Filtering admins and redirecting to several files
The admins file should be similar to this:
Admin
DarkAdmin
PinkAdmin
The admin names must not contain spaces, i.e. DarkAdmin is OK, but Dark Admin would not work. Similarly, the user names in your log files must not contain spaces for this script to work.
Execute the following script with this command:
awk -f script.awk admins.txt *.log
Probably best to make sure the log files and the filtered output are in separate directories.
NR == FNR {
admins[ $1 ] = NR
next
}
/User Authenticator|Disconnected/ {
if ( $8 in admins ) next
print >> (FILENAME ".filtered.log")
}
The above script will:
Ignore all lines that mention an admin.
Create a filtered version of every log file, i.e. if there are 5 log files then 5 filtered logs will be created.
Sorting the output
You have two sort keys in the file: the user and the time. This is beyond the capabilities of the standard Windows sort program, which seems very primitive. You should be able to do it with GNU sort:
sort --stable --key=8 test.log > sorted_test.log
Where:
--key=8 tells it to sort on field 8 (user)
--stable keeps the files in date order within each user
Example of sorting a log file and displaying the result:
terry@Envy:~$ sort --stable --key=8 test.log
[15:23:30] [User Authenticator #1/INFO]: UUID of player Doris is (UUID)
[16:36:30] [User Disconnected #1/INFO]: UUID of player Doris is (UUID)
[15:46:30] [User Authenticator #1/INFO]: UUID of player DraconicPiggy is (UUID)
[16:36:30] [User Disconnected #1/INFO]: UUID of player DraconicPiggy is (UUID)
[10:24:30] [User Authenticator #1/INFO]: UUID of player Joe is (UUID)
terry@Envy:~$
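If GNU tools aren't to hand, the same filter-plus-stable-sort pipeline can be sketched in Python. This is only a sketch under the same assumption the awk script makes: the player name is the 8th whitespace-separated field of each line.

```python
def filter_and_sort(lines, admins=()):
    """Keep login/logout lines, drop admin entries, and stable-sort by player.

    Assumes the player name is the 8th whitespace-separated field,
    as in the sample log lines above.
    """
    def player(line):
        fields = line.split()
        return fields[7] if len(fields) >= 8 else ""

    kept = [line for line in lines
            if "User Authenticator" in line or "Disconnected" in line]
    kept = [line for line in kept if player(line) not in set(admins)]
    # sorted() is stable, so each player's lines stay in time order.
    return sorted(kept, key=player)
```

This mirrors the awk-plus-sort approach; adjust the field index if your server's log format differs.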

Fail Flink Job if source/sink/operator has undefined uid or name

In my jobs I'd like every source/sink/operator to have the uid and name properties defined for easier identification.
operator.process(myFunction).uid(MY_FUNCTION).name(MY_FUNCTION);
Right now I need to manually review every job to detect missing settings. How can I tell Flink to fail job if any name or uid is not defined?
Once you get a StreamExecutionEnvironment you can get the graph of the operators.
When you don't define a name, Flink autogenerates one for you. In addition, if you set a name, at least in the case of sources and sinks, Flink adds a prefix Source: or Sink: to the name.
When you don't define a uid, the uid value in the graph at this stage is null.
Given your scenario, where the name and uid are always the same, you can check that all operators have been provided with a name and uid as follows:
getExecutionEnvironment().getStreamGraph().getStreamNodes().stream()
.filter(streamNode -> streamNode.getTransformationUID() == null ||
!streamNode.getOperatorName().contains(streamNode.getTransformationUID()))
.forEach(System.out::println);
This snippet will print all the operators that don't match your rules.
This won't work in 100% of cases, e.g. when a uid is a substring of the name, but it gives you a general way to access the operator information, apply the filters that fit your case, and implement your own strategy.
This snippet can be used as part of your CI, or directly in your application.
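The rule itself ("every operator must have a uid, and the uid must appear in the name") can be factored out and tested independently of Flink. A minimal sketch in Python of that check, applied to (name, uid) pairs harvested from the stream graph however you obtain them:

```python
def find_violations(nodes):
    """nodes: iterable of (operator_name, uid) pairs, where uid may be None.

    Returns the pairs whose uid is missing or not embedded in the name,
    mirroring the filter in the Java snippet above.
    """
    return [(name, uid) for name, uid in nodes
            if uid is None or uid not in name]

def assert_all_named(nodes):
    """Fail fast (e.g. in a CI check) if any operator violates the rule."""
    bad = find_violations(nodes)
    if bad:
        raise RuntimeError(f"operators with missing uid/name: {bad}")
```

Raising instead of printing is what actually makes the job (or the CI step) fail when a uid or name is missing.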

Command in command in Redis

I just started with Redis. I need to create a database for an online store or something similar; the main idea is to show functionality. I have never worked in Redis or a terminal, so it's a bit confusing. First I want to create a database of users with a user-id counter:
SET user:id 1000
INCR user:id
(integer) 1001
Can I somehow use a command within a command, like:
HMSET incr user:id username "Lacresha Renner" gender "female" email "renner@gmail.com"
(error) ERR wrong number of arguments for HMSET
so that my database automatically counts new users? Or is that not possible in Redis? Should I do it by hand, like user:1, user:2, user:n?
I am working in terminal (MacOS).
HMSET receives a key name, followed by pairs of field names and values.
Your first argument (incr) is invalid, and the id part of the key should be an explicit id.
e.g.:
HMSET user:1000 username "Lacresha Renner" gender "female" email "renner@gmail.com"
Regarding your first SET: you should have one key whose sole purpose is to be a running uid, and use the reply of INCR as the new UID for the new user HASH key name (1000 in the above example).
If you never delete users, the value of the running UID will be the number of users in your system. If you do delete users, you should also insert each UID into a SET and remove it when you delete the user. In that case, SCARD will give you the number of users in your system, and SMEMBERS (or SSCAN) will give you all of their UIDs.
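The allocate-then-store flow described above (INCR for the uid, then HMSET for the hash, then SADD for the member set) is simulated below with plain Python dicts, so the control flow is visible without a running Redis server; against real Redis you would issue the same three commands via redis-cli or a client library:

```python
def create_user(db, fields):
    """Simulate INCR user:id, HMSET user:<uid> ..., SADD users <uid>.

    `db` is a plain dict standing in for Redis in this sketch.
    """
    db["user:id"] = db.get("user:id", 1000) + 1   # INCR user:id
    uid = db["user:id"]
    db[f"user:{uid}"] = dict(fields)              # HMSET user:<uid> field value ...
    db.setdefault("users", set()).add(uid)        # SADD users <uid>
    return uid
```

In real Redis, INCR is atomic, so two clients creating users concurrently can never receive the same uid.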

How do I find the size of a DB2 (luw) database?

I know you can look at the size of an uncompressed backup, but that's not practical.
Is there a command to find the size of the database while it is online? (In Linux/Unix/windows)
When connected to a database as db2admin (or with similar permissions), use the following command:
call get_dbsize_info(?,?,?,-1);
The first three parameters are output parameters:
Value of output parameters
--------------------------
Parameter Name : SNAPSHOTTIMESTAMP
Parameter Value : 2014-06-17-13.59.55.049000
Parameter Name : DATABASESIZE
Parameter Value : 334801764352
Parameter Name : DATABASECAPACITY
Parameter Value : 1115940028416
Return Status = 0
The size is given in bytes, so divide by 1024^3 to get GB.
The final parameter is how often the snapshot is refreshed; -1 uses the default settings.
Note: this command does not take logs etc. into account, so the database may occupy much more space on disk.
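For example, converting the DATABASESIZE value shown above from bytes to gigabytes:

```python
database_size = 334801764352        # DATABASESIZE, in bytes, from the call above
size_gb = database_size / 1024**3   # divide by 1024^3
print(f"{size_gb:.1f} GB")          # prints "311.8 GB"
```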
Use db2top:
press l (for sessions)
then press p (it will show the total size of the db and the used size of the db)
For specific schema, in KBytes, use:
SELECT SUM(TOTAL_P_SIZE) FROM (
  SELECT TABNAME,
         (DATA_OBJECT_P_SIZE + INDEX_OBJECT_P_SIZE + LONG_OBJECT_P_SIZE +
          LOB_OBJECT_P_SIZE + XML_OBJECT_P_SIZE) AS TOTAL_P_SIZE
  FROM SYSIBMADM.ADMINTABINFO
  WHERE TABSCHEMA = 'PUBLIC'
) AS T
Reference: https://www.ibm.com/support/pages/how-do-i-find-out-disk-space-usage-managing-server-octigate-database-tables
The following command will show the memory used by the database while it is online:
db2pd -dbptnmem
You can monitor a variety of things with the db2pd command:
https://www.ibm.com/docs/en/db2/11.1?topic=commands-db2pd-monitor-troubleshoot-db2-engine-activities

get the default email from the user on a Linux box

Is there any way to programmatically get the current user's email address?
I know the email is usually user@hostname, but is there any way I can get the email?
I know how to get the username and the hostname so I can build it myself, but I want to be sure that I get the email address even when the email is not user@hostname.
Code in C is appreciated.
Thanks
There is no such standard mapping of user account to email address - at least not for ordinary /etc/passwd derived accounts. Consider that a user might not even have an email address.
Nobody's mentioned the GECOS fields in the /etc/passwd file.
You'll notice that the fifth field in your entry in /etc/passwd is either blank, or a comma-separated list the first element of which is your full name. Originally in Bell Labs (before the days of email) the GECOS fields were:
User's full name (or application name, if the account is for a program)
Building and room number or contact person
Office telephone number
Any other contact information (pager number, fax, etc.)
Some Linux distributions store the user's default email address in the 4th GECOS field, and if your system doesn't do this by default, you can set it up yourself. Ordinary users without superuser privilege can edit their GECOS fields using the command line command chfn. To access this field, you can then do
grep ${USER}: /etc/passwd | awk -F\: '{print $5}' | awk -F\, '{print $4}'
or whatever floats your boat in your language of choice (No, I am NOT going to write C. This is the twenty-first century!).
There is no standard mapping of user accounts to RFC822 (i.e. user@domain) email addresses. Generally, a default setup of typical mail transfer agents will accept local mail to addresses without a domain and deliver it to the user account of the same name. But even that can't be relied on, as you may not even have an MTA.
The UNIX way of doing this is to send email through the local mail-transfer-agent - simply invoking /usr/bin/mail is enough. The system administrator is responsible for configuring the local MTA to make sure email works properly.
If you want to send email to the local user, just send it to their username - if they read their email somewhere other than locally, the MTA should be configured to forward it to them.
If you just want to use the right "from" email address when sending email on behalf of a local user, so they get replies in the right place - again, just use their username. The MTA should be configured to do the right translation.
This way of doing things is good, because it means that this configuration only has to be done in one place (the MTA), rather than having to manually configure every single application on the box that sends or receives email.
Just to complement Simon's answer (I don't have enough reputation to comment on it): GECOS stands for General Comprehensive Operating System, aka General Electric Comprehensive Operating Supervisor. The most portable way I found to get the user's GECOS field (as it might not be defined directly in your /etc/passwd file, depending on your system's configuration) is the following:
getent passwd <USERNAME> | awk -F ':' '{print $5}'
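The same extraction can be done without spawning grep/awk, using Python's standard pwd module. As above, this assumes the email (if present at all) lives in the 4th comma-separated sub-field of GECOS; on many systems it is simply empty:

```python
import os
import pwd

def parse_gecos_email(gecos):
    """Return the 4th comma-separated GECOS sub-field, or None if absent."""
    parts = gecos.split(",")
    return parts[3] if len(parts) >= 4 and parts[3] else None

def user_email(username=None):
    """Look up a user's passwd entry and pull out the assumed email sub-field."""
    entry = pwd.getpwnam(username) if username else pwd.getpwuid(os.getuid())
    return parse_gecos_email(entry.pw_gecos)
```

Like getent, pwd goes through NSS, so this also works when accounts come from LDAP rather than /etc/passwd.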
It depends how the user is stored. In a simple passwd file there's no email address, only a username. But you can have additional information with other authentication method like LDAP or SQL.
Prompt the user for their email. If you have no guarantee that the email is user#hostname, then how else do you expect to determine what their email is other than asking them?
You can't get the actual email address in any standard way. I would try to send the mail to just the username. The chances that it will end up on the correct domain are actually not that bad...
Check the prompt of the terminal you're using, that is:
root@peter-laptop#
The username is shown before the @ sign, that is root@peter-laptop# for root, or peter@peter-laptop$ for user peter.
Try looking in /var/mail/; there you should have a file for each user that has mail (not all users will have one), and you can indeed read the mail from those files.
Then you can redirect the mail to anywhere else with the sendmail tool.
