xmlstarlet - extracting Salesforce data for controlled fields

I'm looking for a way to extract data for controlled fields from a Salesforce config dump. The dump is an XML file containing the following information:
<?xml version="1.0" encoding="UTF-8"?>
<valueSet>
<valueSettings>
<controllingFieldValue>Product A</controllingFieldValue>
<controllingFieldValue>Product B</controllingFieldValue>
<valueName>1</valueName>
</valueSettings>
<valueSettings>
<controllingFieldValue>Product A</controllingFieldValue>
<valueName>2</valueName>
</valueSettings>
</valueSet>
It means:
for 'Product A' and 'Product B', value 1 is allowed;
for 'Product A', value 2 is allowed.
So for 'Product A', values 1 and 2 are allowed.
For 'Product B', only value 1 is allowed.
I would like to create a list of allowed values per product.
I'm trying with this:
xmlstarlet sel -T -t -m '//valueSettings' -i 'controllingFieldValue[text()="Product A"]' -v 'controllingFieldValue[text()]' -o '|' -v 'valueName[text()]' -n test.xml
Output is:
Product A
Product B|1
Product A|2
So the other controllingFieldValue elements are also displayed for any valueSettings node where a matching controllingFieldValue was found.
How can I display only the controllingFieldValue matching the criteria, together with the corresponding valueName?

I was able to get the proper result using:
xmlstarlet sel -T -t -m '//valueSettings' -i 'controllingFieldValue[text()="Product A"]' -v 'controllingFieldValue[text()="Product A"]' -o '|' -v 'valueName[text()]' -n test.xml
Output is:
Product A|1
Product A|2
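If you want to go further and build the full list of allowed values for every product, not just Product A, a minimal sketch (assuming the same test.xml) iterates over each controllingFieldValue and pairs it with its sibling valueName, then sorts the result:
xmlstarlet sel -T -t -m '//valueSettings/controllingFieldValue' -v '.' -o '|' -v '../valueName' -n test.xml | sort
With the sample file above this should print Product A|1, Product A|2 and Product B|1, one pair per line.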

Related

How can I get the names of all namespaces containing the word "nginx" and store those names in an array

Basically I want to automate a task where I have some namespaces in Kubernetes that I need to delete and others that I want to leave alone. The namespaces to delete contain the word nginx. So I was thinking that, to do this, I could filter the output of get namespace with some regex, store those namespaces in an array, and then iterate through that array, deleting them one by one.
array=($(kubectl get ns | jq -r 'keys[]'))
declare -p array
for n in {array};
do
kubectl delete $n
done
I tried doing something like this but this is very basic and doesn't even have the regex. But I just left it here as an example to show what I'm trying to achieve. Any help is appreciated and thanks in advance.
kubectl get ns doesn't output JSON unless you add -o json. This:
array=($(kubectl get ns | jq -r 'keys[]'))
Should result in an error like:
parse error: Invalid numeric literal at line 1, column 5
kubectl get ns -o json emits a JSON response that contains a list of Namespace resources in the items key. You need to get the metadata.name attribute from each item, so:
kubectl get ns -o json | jq -r '.items[].metadata.name'
You only want namespaces that contain the word "nginx". We could filter the above list with grep, or we could add that condition to our jq expression:
kubectl get ns -o json | jq -r '.items[]|select(.metadata.name|test("nginx"))|.metadata.name'
This will output your desired namespaces. At this point, there's no reason to store this in an array and use a for loop; you can just pipe the output to xargs:
kubectl get ns -o json |
jq -r '.items[]|select(.metadata.name|test("nginx"))|.metadata.name' |
xargs kubectl delete ns
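If you want to preview what would be removed before deleting anything, a hedged variant of the same pipeline (assuming a kubectl recent enough to support client-side dry runs) is:
kubectl get ns -o json |
jq -r '.items[]|select(.metadata.name|test("nginx"))|.metadata.name' |
xargs kubectl delete ns --dry-run=client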
kubectl get ns
output
NAME                          STATUS   AGE
default                       Active   75d
kube-public                   Active   75d
kube-system                   Active   75d
oci-service-operator-system   Active   31d
olm                           Active   31d
command
kubectl get ns --no-headers | awk '{if ($1 ~ "de") print $1}'
Output
default
kube-node-lease
This will give you a list of namespaces:
array=$(kubectl get ns --no-headers | awk '{if ($1 ~ "de") print $1}')
Testing
bash-4.2$ array=$(kubectl get ns --no-headers | awk '{if ($1 ~ "de") print $1}')
bash-4.2$ echo $array
default kube-node-lease
bash-4.2$ for n in $array; do echo $n; done
default
kube-node-lease
bash-4.2$
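If you do want a real bash array rather than a whitespace-split string, a minimal sketch (assuming bash 4+ for mapfile; nginx_namespaces is just an illustrative name, and the awk filter matches the nginx namespaces from the original question):
# read one namespace name per line into an array, then delete them one by one
mapfile -t nginx_namespaces < <(kubectl get ns --no-headers | awk '$1 ~ /nginx/ {print $1}')
for ns in "${nginx_namespaces[@]}"; do
kubectl delete ns "$ns"
done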

Return SQL Query as bash array

First Post. Bash novice. Couldn't find an effective solution.
Looking for an efficient parsing method / alternative method
My initial attempt: (+ edit thanks to @larks)
services=($($PSQL "SELECT array(select name from services);"))
echo ${services[@]}
>array -------------------------------- {Shampoo,Dying,Long-cut} (1 row)
echo ${#services[@]}
>5
echo ${services[2]}
>{Shampoo,Dying,Long-cut}
I'm looking to end up with an array identical to the ones below but without creating a csv in the process.
echo $($PSQL "\copy (select name from services) to 'services.csv';")
readarray -t arr < services.csv
echo ${arr[@]}
>Shampoo Dying Long-cut
echo ${#arr[@]}
>3
Your services variable is not an array; to create an array you need to surround the value with (...). For example, compare this:
$ example=$(echo one two three)
$ echo ${example[0]}
one two three
With this:
$ example=( $(echo one two three) )
$ echo ${example[0]}
one
So assuming that your $PSQL command generates output in an appropriate format, you want:
services=( $($PSQL "SELECT array(select name from services);") )
For what you're trying to do in your question, I don't see any reason to use the array function. Given a table like this:
CREATE TABLE services (
id serial primary key,
name text
);
INSERT INTO services (name) VALUES ('foo');
INSERT INTO services (name) VALUES ('bar');
INSERT INTO services (name) VALUES ('qux');
A query like this will produce results amenable to turning into a bash array:
$ psql -t --csv -U postgres -d arraytest -c 'select name from services'
foo
bar
qux
In a bash script:
services=( $(psql -t --csv -U postgres -d arraytest -c 'select name from services') )
for service in "${services[#]}"; do
echo "SERVICE: $service"
done
Which produces:
SERVICE: foo
SERVICE: bar
SERVICE: qux
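One caveat with services=( $(...) ): the unquoted command substitution splits on any whitespace, so a hypothetical name containing a space would become two array elements. A hedged alternative (assuming bash 4+ and the same psql invocation) reads exactly one array element per output line:
# one service name per line, spaces inside a name preserved
mapfile -t services < <(psql -t --csv -U postgres -d arraytest -c 'select name from services')
printf 'SERVICE: %s\n' "${services[@]}"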

How to write a query to find all tables in a db that have a specific column name? HIVE

I've got a database with about 400 tables and I need to find all the tables searching by the column names. Basically I need something like:
select <tables> from <database> where table.columnName='tv';
How can I do this?
The shell script below will give you the desired result (replace <database_name> with your database):
hive -e 'show tables in <database_name>' |
while read -r line
do
    echo "TABLE NAME : $line"
    # describe each table and check whether it has a column named "tv" (-w matches the whole word)
    if hive -e "describe <database_name>.$line" | grep -qw "tv"; then
        echo "Required table name: $line"
    fi
done

BASH store values in an array and check difference of each value

[CentOS, BASH, cron] Is there a method to declare variables that persist even when the system restarts?
The scenario is to snmpwalk interface I/O errors and store the values in an array. A cron job to snmpwalk again, say 5 mins later, would have another set of values. I would like to compare them with the previous corresponding value of each interface. If the difference exceeds the threshold (50), an alert would be generated.
So the questions are: how do I store an array variable so that it isn't lost when the system restarts? And how do I check the difference of each value in two arrays?
UPDATE Mar 16, 2012: I attach my final script here for your reference.
#!/bin/bash
# This script is to monitor interface Input/Output Errors of Cisco devices, by snmpwalk the error values every 5 mins, and send email alert if incremental value exceeds threshold (e.g. 500).
# Author: Wu Yajun | Created: 12Mar2012 | Updated: 16Mar2012
##########################################################################
DIR="$( cd "$( dirname "$0" )" && pwd )"
host=device.ip.addr.here
# Check and initiate .log file storing previous values, create .tmp file storing current values.
test -e $DIR/host1_ifInErrors.log || snmpwalk -c public -v 1 $host IF-MIB::ifInErrors > $DIR/host1_ifInErrors.log
snmpwalk -c public -v 1 $host IF-MIB::ifInErrors > $DIR/host1_ifInErrors.tmp
# Compare differences of the error values, and alert if diff exceeds threshold.
# To exclude checking some interfaces, e.g. Fa0/6, Fa0/10, Fa0/11, change the below "for loop" to style as:
# for i in {1..6} {8..10} {13..26}
totalIfNumber=$(echo $(wc -l $DIR/host1_ifInErrors.tmp) | sed 's/ \/root.*$//g')
for (( i=1; i<=$totalIfNumber; i++))
do
currentValue=$(cat $DIR/host1_ifInErrors.tmp | sed -n ''$i'p' | sed 's/^.*Counter32: //g')
previousValue=$(cat $DIR/host1_ifInErrors.log | sed -n ''$i'p' | sed 's/^.*Counter32: //g')
diff=$(($currentValue-$previousValue))
[ $diff -ge 500 ] && (ifName=$(echo $(snmpwalk -c public -v 1 $host IF-MIB::ifName.$i) | sed 's/^.*STRING: //g') ; echo "ATTENTION - Input Error detected from host1 interface $ifName" | mutt -s "ATTENTION - Input Error detected from host1 interface $ifName" <email address here>)
done
# Store current values for next time checking.
snmpwalk -c public -v 1 $host IF-MIB::ifInErrors > $DIR/host1_ifInErrors.log
Save the variables in a file. Add a date stamp:
echo "$(date)#... variables here ...." >> "$file"
Read the last values from the file:
tail -1 "$file" | cut "-d#" -f2 | read ... variables here ....
That also gives you a nice log file where you can monitor the changes. I suggest always appending to the file, so you can easily see when the service was down or didn't run for some reason.
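One caveat with piping into read: in bash each part of a pipeline runs in a subshell, so variables set by read at the end of a pipeline are gone once the pipeline finishes. A hedged workaround (a, b and c are hypothetical variable names standing in for the elided list) is to feed read from process substitution instead:
read -r a b c < <(tail -1 "$file" | cut -d'#' -f2)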
To check for changes, you can use a simple if:
if [[ "...old values..." != "...new values..." ]]; then
send mail
fi
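For the other half of the question, comparing the current values with the previous ones element by element, here is a minimal sketch with made-up sample data and the threshold of 50 from the question:
# previous and current counter values; in practice these would come from two snmpwalk runs
old=(10 200 35)
new=(12 270 36)
threshold=50
for i in "${!new[@]}"; do
    diff=$(( new[i] - old[i] ))
    if (( diff >= threshold )); then
        echo "interface index $i exceeded threshold: +$diff"
    fi
done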

SQL Server BCP: How to put quotes around all fields?

I have this BCP command:
'bcp DBName..vieter out c:\test003.txt -c -T /t"\",\"" -S SERVER'
The output CSV I get does not put quotes around the fields; instead it puts them around the commas! How can I get the /t"\",\"" to put quotes around all fields?
Thanks all
Setting the row terminator in addition to the field terminator should do the trick
'bcp DBName..vieter out c:\test003.txt -c -T -t"\",\"" -r"\"\n\"" -S SERVER'
This will likely work, but will miss the leading " for the first field of the first line, and perhaps the trailing " for the last field of the last line - I'm not sure, just guessing really, no server here!
or try using QUOTENAME to wrap text fields (you could also wrap numbers, but that isn't normally required.)
'bcp "SELECT id, age, QUOTENAME(name,'"') FROM DBName..vieter" queryout c:\test003.txt -c -T -t"," -S SERVER'
You need to use CHAR(34) for the quote. This page has more details: http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=153000
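A hedged sketch of that idea, reusing the id, age and name columns from the QUOTENAME example above (their types are assumptions) and building the fully quoted line in the query itself:
bcp "SELECT CHAR(34) + CAST(id AS VARCHAR(20)) + CHAR(34) + ',' + CHAR(34) + CAST(age AS VARCHAR(20)) + CHAR(34) + ',' + CHAR(34) + name + CHAR(34) FROM DBName..vieter" queryout c:\test003.txt -c -T -S SERVER
Note that if any column is NULL the whole concatenated line becomes NULL, so NULLable columns may need ISNULL wrappers.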
Alternatively, if you are fine with a PowerShell-based script, you can try the code below, which does automatic quoting.
Invoke-sqlcmd -ConnectionString "Server=SERVERNAME, `
3180;Database=DATABASENAME;Trusted_Connection=True;" `
-Query "SET NOCOUNT ON;SELECT * FROM TABLENAME" -MaxCharLength 700 | `
Export-Csv -NoTypeInformation -path C:\temp\FileName.csv -Encoding UTF8
bcp "SELECT char(34) + * +char(34) FROM atable queryout "C:\temp\out.csv" -T -N -c /t"\",\""
This will put quotes before and after each field (including the first and the last).
Here is the list of commands I used:
BCP "DECLARE #colnames VARCHAR(max);SELECT #colnames = COALESCE(#colnames + ',', '') + column_name from databaseName.INFORMATION_SCHEMA.COLUMNS where TABLE_NAME='tableName'; select #colnames;" queryout "C:\HeadersOnly.csv" -r"\n\"" -c -T -Uusername -Ppassword -SserverName
bcp databaseName.schema.tableName out "C:\EmployeeDatawithoutheaders.csv" -T -t"\",\"" -r"\"\n\"" -c -Uusername -Ppassword -SserverName
copy /b C:\HeadersOnly.csv+C:\EmployeeDatawithoutheaders.csv C:\EmployeeData.csv
del C:\HeadersOnly.csv
del C:\EmployeeDatawithoutheaders.csv
I guess your goal was to clearly separate field values by using a unique delimiter so that the import procedure doesn't have an issue.
I had the same issue and found this workaround useful: using an unusual field terminator, for example | or even a string like /#/, can be very unique and shouldn't clash with your string content. You can also use hex values (limited, see https://learn.microsoft.com/en-us/sql/tools/bcp-utility?view=sql-server-2017).
export
bcp DB.dbo.Table out /tmp/output2.csv -c -t "/#/" -U sa -P secret -S localhost
import
bcp TargetTable in /tmp/output2.csv -t "/#/" -k -U sa -P secret -S localhost -d DBNAME -c -b 50000
The actual workable answer, which fixes the missing leading quote, is to:
A) generate format file with bcp :
bcp db.schema.tabel format nul -c -x -f file.xml -t"\",\"" -r"\"\r\n" -T -k
B) edit that file: copy field 1 and insert it above as field 0 (a new first field), set MAX_LENGTH="1", and remove the separator and one of the quotes that were in field 1's terminator:
<FIELD ID="0" xsi:type="CharTerm" TERMINATOR="\"" MAX_LENGTH="1" COLLATION="SQL_Latin1_General_CP1_CI_AS"/>
The trick works because you are adding a FIELD (the interface to the file) to absorb the first separator, which always yields a null value, without adding a corresponding COLUMN under ROW (the interface to the query output).
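The answer stops after editing the format file; the export itself then has to reference that file, along these lines (hedged sketch with a hypothetical output path, reusing the placeholder names from step A):
bcp db.schema.tabel out c:\out.csv -f file.xml -T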
